
feat: migrate from ai/rsc to ai/ui (useChat)#579

Open
ngoiyaeric wants to merge 2 commits into main from feat/ai-sdk-ui-migration

Conversation


ngoiyaeric (Collaborator) commented Apr 24, 2026

Summary

  • Replaces experimental ai/rsc pattern (server-pushed React components via createStreamableUI) with the stable ai/ui pattern (useChat hook + route handler + client-side rendering)
  • Creates /api/chat/stream route handler using streamText + data stream protocol, replacing the submit server action from createAI
  • Migrates all 23 files that used ai/rsc APIs — zero remaining ai/rsc imports after this PR

Key Changes

Server (6 files):

  • app/api/chat/stream/route.ts — New streaming endpoint with task manager -> researcher -> writer -> query suggestor orchestration
  • lib/agents/tools/* — Tools return JSON data only (removed uiStream dependency)
  • lib/agents/* — Agents return data/stream results only (removed UI rendering)

Client (15 files):

  • components/chat-provider.tsx — New ChatProvider wrapping useChat for shared state
  • components/tool-result-renderer.tsx — Client-side tool result -> UI component mapping
  • All components migrated from useUIState/useAIState/useActions/useStreamableValue to useChatContext
  • BotMessage accepts string instead of StreamableValue<string>
  • CollapsibleMessage accepts boolean instead of StreamableValue<boolean>

Pages (2 files):

  • app/page.tsx, app/search/[id]/page.tsx: <AI> provider replaced with <ChatProvider>
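
Rehydrating a saved chat into the new provider implies mapping persisted rows into the Message shape that useChat accepts as initialMessages. A minimal sketch, with hypothetical DbMessage and Message types standing in for the app's real ones:

```typescript
// Illustrative shapes; the real DB row and SDK Message types may differ.
interface DbMessage {
  id: string
  role: string
  content: string
  createdAt?: string
}

interface Message {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  createdAt?: Date
}

// Map persisted rows into the shape useChat accepts as initialMessages,
// keeping only roles the message renderer knows how to display.
function toInitialMessages(rows: DbMessage[]): Message[] {
  return rows
    .filter(r => r.role === 'user' || r.role === 'assistant' || r.role === 'system')
    .map(r => ({
      id: r.id,
      role: r.role as Message['role'],
      content: r.content,
      createdAt: r.createdAt ? new Date(r.createdAt) : undefined
    }))
}
```

Filtering to renderable roles up front avoids handing useChat rows (e.g. raw tool output) that the UI cannot display.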

Removed:

  • createAI, createStreamableUI, createStreamableValue, getMutableAIState, getAIState
  • useUIState, useAIState, useActions, useStreamableValue, readStreamableValue
  • getUIStateFromAIState reconstruction function
  • onSetAIState/onGetUIState callbacks

Stats: 29 files changed, 830 insertions, 1525 deletions

Test plan

  • Verify npx next build passes (confirmed in this PR)
  • Test basic chat flow: send message -> receive streamed response
  • Test tool invocations: search results, video search, geospatial queries render correctly
  • Test inquiry/copilot flow: task manager triggers inquiry -> user responds -> proceeds
  • Test resolution search: map capture -> analysis -> GeoJSON overlay
  • Test related queries: appear after responses, clicking sends follow-up
  • Test chat persistence: messages saved to DB, reload shows history
  • Test mobile layout: icons bar, new chat, attachment flow
  • grep -rn "ai/rsc" returns zero results

Closes #59

Generated with Claude Code

Summary by CodeRabbit

  • Refactor

    • Chat state moved to a client-side provider for snappier interactions and simplified UI updates.
    • Message rendering and controls unified under the new chat context, reducing latency and improving reliability.
  • New Features

    • Authenticated streaming chat API enabling resolution searches (image-based), tool-result annotations, and related-query suggestions.
    • Improved single-shot suggestion and search flows for more consistent suggestion/results display.

Replace the experimental ai/rsc streaming-UI pattern with the stable
ai/ui pattern using useChat hook and route handler. This removes all
server-side React component streaming in favor of client-side rendering.

- Create /api/chat/stream route handler with streamText
- Create ChatProvider wrapping useChat for shared chat state
- Refactor all tools to return JSON data (no uiStream)
- Refactor all agents to return data (no UI rendering)
- Migrate all client components from RSC hooks to useChat
- Create ToolResultRenderer for client-side tool UI mapping
- Remove createAI, createStreamableUI, StreamableValue usage
- Zero remaining ai/rsc imports

Closes #59

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

vercel Bot commented Apr 24, 2026

The latest updates on your projects.

Project qcx: Ready. Actions: Preview, Comment. Updated (UTC): Apr 25, 2026 10:45am


coderabbitai Bot commented Apr 24, 2026

Walkthrough

Replaces RSC-based streaming AI actions with a client-side ChatProvider and a new authenticated /api/chat/stream endpoint; removes the old app/actions.tsx server/action module and migrates agents/tools from incremental UI-streaming to single-shot result-returning functions. UI components now use chat context and plain Message[] props.

Changes

  • Core removal (app/actions.tsx): Removed the entire RSC actions module and its exports (AI, submit, clearChat, AIState, UIState, getUIStateFromAIState).
  • New streaming API (app/api/chat/stream/route.ts): Added an authenticated POST handler exporting maxDuration and POST; routes requests for resolution_search, hardcoded debug responses, and general inquire/research/proceed/writer flows; streams annotations and persists via saveChat.
  • Providers / pages (app/page.tsx, app/search/[id]/page.tsx): Replaced the <AI> provider with ChatProvider using chatId/initialMessages; updated imports and message mapping to Message[].
  • Chat context (components/chat-provider.tsx): Added ChatProvider, useChatContext, and an Annotation interface; provides append, setMessages, getToolResults, getInquiry, and getRelatedQueries.
  • Message rendering (components/chat-messages.tsx, components/collapsible-message.tsx, components/message.tsx, components/tool-result-renderer.tsx): Switched from UIState/streamable values to Message[]/plain props; removed useStreamableValue; added ToolResultRenderer.
  • Interaction components (components/chat-panel.tsx, components/copilot.tsx, components/followup-panel.tsx, components/header-search-button.tsx, components/mobile-icons-bar.tsx, components/resolution-carousel.tsx): Replaced RSC useActions/FormData submissions with useChatContext().append; file/image handling converted to base64 data URLs; clearing chat uses setMessages + router navigation.
  • Search/result UI (components/search-related.tsx, components/search-section.tsx, components/video-search-section.tsx): Converted streamed/streamable props to synchronous string/object props; added JSON parse guards and simplified rendering; removed stream hooks.
  • Agents, single-shot (lib/agents/inquire.tsx, lib/agents/query-suggestor.tsx, lib/agents/researcher.tsx, lib/agents/writer.tsx): Removed UI-stream params and incremental UI emission; agents now return final structured values/strings and build segments without emitting RSC UI.
  • Tools & infra (lib/agents/tools/index.tsx, lib/agents/tools/*.tsx): Removed uiStream/fullResponse from ToolProps and tool factories; tools now return structured results or { error } objects without inline UI streaming or component rendering.
  • Suggestions (lib/actions/suggest.ts): getSuggestions now returns Promise<PartialRelated> from generateObject (no streamable value).
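
The "JSON parse guards" noted for the search/result components can be sketched as a small helper that tolerates malformed payloads instead of throwing during render. Names and shapes here are illustrative, not the app's actual types:

```typescript
// Illustrative result shape; the real components define their own.
interface SearchResult {
  results?: { title: string; url: string }[]
  error?: string
}

// Parse a stringified tool result defensively: return null for invalid
// JSON or non-object payloads so the caller can skip rendering.
function parseToolResult(raw: string): SearchResult | null {
  try {
    const parsed = JSON.parse(raw)
    if (parsed === null || typeof parsed !== 'object') return null
    return parsed as SearchResult
  } catch {
    return null
  }
}
```

A component receiving a synchronous string prop can then bail out (render nothing or a fallback) when the guard returns null, rather than crashing mid-render.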

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client (Browser)
  participant Provider as ChatProvider (client)
  participant API as /api/chat/stream (server)
  participant Task as taskManager / researcher / writer
  participant DB as saveChat (DB)

  rect rgba(200,230,255,0.5)
    Client->>Provider: User submits message (+ optional fileData)
    Provider->>API: POST { chatId, messages, body }
  end

  rect rgba(200,255,200,0.5)
    API->>Task: route by action (resolution_search / inquire / proceed)
    Task-->>API: streaming annotations & final answer (tool_result, 0:, related)
    API->>Client: streaming frames (annotations, answer, terminator)
  end

  rect rgba(255,230,200,0.5)
    API->>DB: saveChat(conversation) (async)
    DB-->>API: ack
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

Backend, Review effort 4/5

Suggested reviewers

  • CharlieHelps

Poem

🐰
I hopped through streams and left a trail,
Now context carries every tale,
No RSC burrows in the ground,
ChatProvider springs up all around,
A crunchy carrot of code—hip-hop—refactor bound!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title 'feat: migrate from ai/rsc to ai/ui (useChat)' clearly and specifically describes the main change in the PR: migrating the codebase from ai/rsc to ai/ui using the useChat hook.
  • Linked Issues Check ✅ Passed: The PR fully addresses issue #59's objective to migrate from experimental ai/rsc APIs to production-ready ai/ui (useChat). The new /api/chat/stream endpoint replaces createAI, ChatProvider replaces RSC state management, and all 23 files using ai/rsc APIs have been migrated.
  • Out of Scope Changes Check ✅ Passed: All changes directly support the migration objective: server-side refactoring (new streaming route, agent simplification), client-side updates (ChatProvider, hook migrations), and component adaptations. No unrelated features or technical-debt fixes appear outside the ai/rsc-to-ai/ui scope.


📋 Issue Planner

View plan used: #59


@qodo-code-review

Review Summary by Qodo

Migrate from ai/rsc to ai/ui pattern with useChat hook and route handler

✨ Enhancement


Walkthroughs

Description
• Migrate from experimental ai/rsc to stable ai/ui pattern using useChat hook
• Create /api/chat/stream route handler with streamText for server-side streaming
• Implement ChatProvider context wrapping useChat for shared chat state management
• Refactor all tools and agents to return JSON data instead of UI components
• Create ToolResultRenderer for client-side tool result to UI component mapping
• Remove all createAI, createStreamableUI, StreamableValue dependencies
Diagram
flowchart LR
  A["ai/rsc Pattern<br/>Server Actions"] -->|Migrate| B["ai/ui Pattern<br/>useChat Hook"]
  C["createStreamableUI<br/>createStreamableValue"] -->|Replace| D["ChatProvider<br/>Context API"]
  E["Server Actions<br/>submit/clearChat"] -->|Replace| F["/api/chat/stream<br/>Route Handler"]
  G["Tools return UI<br/>Components"] -->|Refactor| H["Tools return JSON<br/>Data only"]
  I["useUIState<br/>useAIState"] -->|Replace| J["useChatContext<br/>Hook"]
  F -->|Streams| D
  D -->|Provides| J
  H -->|Rendered by| K["ToolResultRenderer<br/>Component"]


File Changes

1. app/api/chat/stream/route.ts ✨ Enhancement +359/-0: New streaming endpoint with orchestration logic
2. components/chat-provider.tsx ✨ Enhancement +97/-0: New context provider wrapping useChat hook
3. app/actions.tsx 🐞 Bug fix +2/-842: Remove all ai/rsc imports and server actions
4. components/chat.tsx ✨ Enhancement +20/-34: Migrate from useUIState/useAIState to useChatContext
5. components/chat-panel.tsx ✨ Enhancement +19/-46: Replace form submission with append method
6. components/chat-messages.tsx ✨ Enhancement +96/-49: Render messages from useChat and annotations
7. components/tool-result-renderer.tsx ✨ Enhancement +46/-0: New component for client-side tool result rendering
8. lib/agents/researcher.tsx ✨ Enhancement +18/-41: Remove uiStream and streamText parameters
9. lib/agents/writer.tsx ✨ Enhancement +3/-18: Remove UI streaming, return text only
10. lib/agents/inquire.tsx ✨ Enhancement +16/-33: Remove uiStream, return inquiry data only
11. lib/agents/tools/geospatial.tsx ✨ Enhancement +9/-81: Remove uiStream dependency, return JSON data
12. lib/agents/tools/search.tsx ✨ Enhancement +2/-24: Remove uiStream, return search results only
13. lib/agents/tools/retrieve.tsx ✨ Enhancement +14/-42: Remove uiStream, return retrieved content only
14. lib/agents/tools/video-search.tsx ✨ Enhancement +2/-24: Remove uiStream, return video results only
15. lib/agents/tools/index.tsx ✨ Enhancement +6/-21: Update getTools to not pass uiStream
16. lib/actions/suggest.ts ✨ Enhancement +15/-19: Remove createStreamableValue, return suggestions directly
17. components/header-search-button.tsx ✨ Enhancement +30/-44: Replace useActions with useChatContext append
18. components/copilot.tsx ✨ Enhancement +11/-43: Remove useStreamableValue, use context append
19. components/search-related.tsx ✨ Enhancement +5/-28: Replace useActions with useChatContext
20. components/followup-panel.tsx ✨ Enhancement +10/-24: Replace useActions with useChatContext
21. components/message.tsx ✨ Enhancement +2/-11: Accept string instead of StreamableValue
22. components/collapsible-message.tsx ✨ Enhancement +2/-5: Accept boolean instead of StreamableValue
23. components/search-section.tsx ✨ Enhancement +3/-5: Accept string instead of StreamableValue
24. components/video-search-section.tsx ✨ Enhancement +3/-5: Accept string instead of StreamableValue
25. components/resolution-carousel.tsx ✨ Enhancement +15/-21: Replace useActions with useChatContext
26. components/mobile-icons-bar.tsx ✨ Enhancement +2/-6: Replace useActions with useChatContext
27. app/page.tsx ✨ Enhancement +4/-5: Replace AI provider with ChatProvider
28. app/search/[id]/page.tsx ✨ Enhancement +19/-40: Replace AI provider with ChatProvider
29. lib/agents/query-suggestor.tsx Additional files +0/-14: ...

@qodo-code-review

qodo-code-review Bot commented Apr 24, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. Resolution results not rendered 🐞 Bug ≡ Correctness
Description
/api/chat/stream emits a resolution_search_result data annotation, but the client never renders
that annotation type, so resolution-search UI (carousel/GeoJSON/map preview) will not appear.
Code

components/chat-messages.tsx[R77-100]

+  // Render inquiry annotation if present
+  const inquiry = annotations.find((a: Annotation) => a.type === 'inquiry')
+  if (inquiry) {
+    const { Copilot } = require('./copilot')
+    renderedMessages.push({
+      id: 'inquiry',
+      component: <Copilot inquiry={{ value: inquiry.data }} />
+    })
+  }
+
+  // Render related queries annotation
+  const related = annotations.findLast?.((a: Annotation) => a.type === 'related')
+  if (related && related.relatedQueries?.items?.length > 0) {
+    const SearchRelated = require('./search-related').default
+    const { Section: SectionComp } = require('./section')
+    renderedMessages.push({
+      id: 'related',
+      component: (
+        <SectionComp title="Related" separator={true}>
+          <SearchRelated relatedQueries={related.relatedQueries} />
+        </SectionComp>
+      )
+    })
+  }
Evidence
The server sends a resolution_search_result annotation, but ChatMessages only renders
tool_result, inquiry, and related annotations, so resolution search results are dropped on the
floor.

app/api/chat/stream/route.ts[235-260]
components/chat-messages.tsx[77-100]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Resolution search results are sent as `type: 'resolution_search_result'` annotations, but the UI never renders them, so users only see the summary text and lose the map/imagery output.
## Issue Context
The route handler emits a `resolution_search_result` annotation containing `{ image, mapboxImage, googleImage, geoJson?, ... }`. The client currently only handles `tool_result`, `inquiry`, and `related` annotations.
## Fix Focus Areas
- components/chat-messages.tsx[77-100]
- app/api/chat/stream/route.ts[235-260]
## What to implement
- Add a renderer branch for `annotation.type === 'resolution_search_result'`.
- Reuse existing UI components (e.g., `ResolutionCarousel`, `GeoJsonLayer`) to display imagery + optional GeoJSON.
- Ensure rendering order matches expectations (e.g., show carousel/overlay before/alongside summary).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Tool error crashes UI 🐞 Bug ☼ Reliability
Description
Tools now return { error: string } objects on failure, but the UI assumes successful shapes and
will throw at runtime (e.g., SearchResults calls .slice() on undefined).
Code

components/tool-result-renderer.tsx[R14-24]

+export function ToolResultRenderer({ toolName, result }: ToolResultRendererProps) {
+  if (!result) return null
+
+  switch (toolName) {
+    case 'search':
+      return <SearchSection result={JSON.stringify(result)} />
+    case 'videoSearch':
+      return <VideoSearchSection result={JSON.stringify(result)} />
+    case 'retrieve':
+      return <RetrieveSection data={result} />
+    case 'geospatialQueryTool': {
Evidence
Tool executors return { error: ... }, but ToolResultRenderer forwards these into
SearchSection/VideoSearchSection/RetrieveSection. Those components then treat the parsed value
as a full result object and pass undefined arrays into SearchResults, which immediately uses
.slice()/.length and will crash.

lib/agents/tools/search.tsx[45-54]
lib/agents/tools/retrieve.tsx[14-35]
lib/agents/tools/video-search.tsx[16-27]
components/tool-result-renderer.tsx[14-45]
components/search-section.tsx[14-34]
components/search-results.tsx[9-23]
components/retrieve-section.tsx[6-15]
components/video-search-section.tsx[13-22]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
When a tool fails, executors return `{ error: string }`. Current renderers assume success shapes and crash when required fields (e.g., `results`) are missing.
## Issue Context
- Tools: `searchTool`, `retrieveTool`, `videoSearchTool` return `{ error: ... }`.
- UI: `ToolResultRenderer` always renders success components.
- `SearchResults` expects `results: []` and uses `.slice()` and `.length`.
## Fix Focus Areas
- components/tool-result-renderer.tsx[14-45]
- components/search-section.tsx[14-34]
- components/video-search-section.tsx[13-22]
- components/retrieve-section.tsx[6-15]
- lib/agents/tools/search.tsx[45-54]
- lib/agents/tools/retrieve.tsx[14-35]
- lib/agents/tools/video-search.tsx[16-27]
## What to implement
- In `ToolResultRenderer`, detect `result?.error` and render a safe error UI (e.g., a `Section` with an error message) instead of delegating to the success components.
- (Optional hardening) Add defensive checks in `SearchSection`/`VideoSearchSection`/`RetrieveSection` to avoid assuming `results` is present.
- Keep the returned error objects as-is (don’t crash) so the assistant response can still render.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
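
A minimal sketch of the guard this issue asks for, assuming tools signal failure with an { error: string } object; the type-predicate name is hypothetical:

```typescript
// Type predicate for the failure shape tools now return on error.
// Anything that is not a non-null object with a string `error` field
// is treated as a (potential) success payload.
function isToolError(result: unknown): result is { error: string } {
  return (
    typeof result === 'object' &&
    result !== null &&
    typeof (result as { error?: unknown }).error === 'string'
  )
}

// In ToolResultRenderer, branch before reaching the success components:
//   if (isToolError(result)) return <ErrorSection message={result.error} />
```

Running this check first keeps malformed or failed tool payloads out of SearchSection and friends, so the assistant text still renders even when a tool fails.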


3. Attachment not sent 🐞 Bug ≡ Correctness
Description
ChatPanel allows submit with a selected file but always appends a message with input.trim() and
never includes selectedFile, so file attachments are silently ignored and file-only submits send
an empty user message.
Code

components/chat-panel.tsx[R89-104]

     return
   }

-    const content: ({ type: 'text'; text: string } | { type: 'image'; image: string })[] = []
-    if (input) {
-      content.push({ type: 'text', text: input })
-    }
-    if (selectedFile && selectedFile.type.startsWith('image/')) {
-      content.push({
-        type: 'image',
-        image: URL.createObjectURL(selectedFile)
-      })
-    }
-
-    setMessages(currentMessages => [
-      ...currentMessages,
-      {
-        id: nanoid(),
-        component: <UserMessage content={content} />
-      }
-    ])
-
-    const formData = new FormData(e.currentTarget)
-    if (selectedFile) {
-      formData.append('file', selectedFile)
-    }
-
-    // Include drawn features in the form data
-    formData.append('drawnFeatures', JSON.stringify(mapData.drawnFeatures || []))
-
+    const userContent = input.trim()
   setInput('')
   clearAttachment()

-    const responseMessage = await submit(formData)
-    setMessages(currentMessages => [...currentMessages, responseMessage as any])
+    await append(
+      { role: 'user', content: userContent },
+      {
+        body: {
+          mapProvider,
+          drawnFeatures: mapData.drawnFeatures || [],
+        }
+      }
+    )
Evidence
selectedFile is only used in the submission guard and then cleared; it is never read/serialized
into the append() call body or message content. If the user submits with only a file,
userContent becomes an empty string but the submit proceeds because selectedFile is truthy.

components/chat-panel.tsx[86-105]
components/chat-panel.tsx[184-190]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The UI still supports file selection, but the selected file is never transmitted to the API. This breaks attachments and also produces empty user messages when submitting with only a file.
## Issue Context
`handleSubmit` checks `selectedFile` but constructs the outgoing message using only `input.trim()`.
## Fix Focus Areas
- components/chat-panel.tsx[86-105]
- app/api/chat/stream/route.ts[45-52]
## What to implement
- If `selectedFile` is present:
- Convert it to a data URL (image) or string (text/plain).
- Include it in the request (e.g., `body.fileData` + `body.fileMimeType`) and/or include a multimodal `content` representation that the server will pass through.
- Prevent appending an empty `content` message when the file is the only input.
- Update the route handler to incorporate the attached file into the last user message (e.g., build a `CoreMessage['content']` array with `{type:'text'}` and `{type:'image'}` parts).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
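
The data-URL encoding this fix calls for can be sketched outside the browser as well; in the client, FileReader.readAsDataURL would produce the same string from the selected File. The helper name is hypothetical:

```typescript
import { Buffer } from 'node:buffer'

// Build a data URL from raw bytes. In the browser the equivalent is
// FileReader.readAsDataURL(selectedFile); this Node sketch shows the
// encoding the request body would carry (e.g. body.fileData).
function toDataUrl(mimeType: string, bytes: Uint8Array): string {
  const base64 = Buffer.from(bytes).toString('base64')
  return `data:${mimeType};base64,${base64}`
}

// e.g. append({ role: 'user', content }, { body: { fileData, fileMimeType } })
```

The route handler can then splice the data URL into the last user message as an image part, as the remediation prompt describes.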


4. Base64 images saved to DB 🐞 Bug ➹ Performance
Description
Resolution-search pushes fileData (a base64 data URL) into CoreMessage.content and
saveChatAsync persists it verbatim; saveChat then JSON-stringifies it into the
messages.content text column, risking extreme DB bloat and slow history loads.
Code

app/api/chat/stream/route.ts[R209-214]

+  const content: CoreMessage['content'] = [
+    { type: 'text', text: 'Analyze this map view.' },
+    { type: 'image', image: fileData, mimeType: 'image/png' }
+  ]
+  messages.push({ role: 'user', content })
+
Evidence
The route adds an image part with image: fileData and later saveChatAsync stores msg.content
directly. The persistence layer stringifies object content into a text column, so the full base64
payload is stored in the database without truncation/sanitization.

app/api/chat/stream/route.ts[209-214]
app/api/chat/stream/route.ts[296-304]
lib/actions/chat.ts[106-112]
lib/db/schema.ts[52-61]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Resolution-search base64 images are being persisted to the `messages` table, which can massively increase DB size and make chat history loads slow or fail.
## Issue Context
- The route inserts `{ type: 'image', image: fileData }` into `CoreMessage.content`.
- `saveChatAsync` persists that `content` verbatim.
- `saveChat` JSON-stringifies object content into `messages.content` (text).
## Fix Focus Areas
- app/api/chat/stream/route.ts[209-214]
- app/api/chat/stream/route.ts[274-337]
- lib/actions/chat.ts[106-112]
## What to implement
- Before calling `saveChatAsync` (or inside it), sanitize message contents:
- Remove image parts entirely from messages being saved, OR
- Replace the `image` field with a constant placeholder (e.g., `"IMAGE_PROCESSED"`).
- If you still need to persist images, store them outside the `messages` table (blob storage) and only persist a reference URL/id.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
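
The sanitization step recommended here can be sketched as a pure function that swaps image payloads for the placeholder before saveChatAsync runs; the message shapes below are illustrative approximations of CoreMessage, not the SDK's exact types:

```typescript
// Illustrative approximation of multimodal message content.
type Part = { type: 'text'; text: string } | { type: 'image'; image: string }
interface StoredMessage { role: string; content: string | Part[] }

// Replace base64 image payloads with a constant placeholder so the
// messages table never stores the full data URL.
function sanitizeForSave(messages: StoredMessage[]): StoredMessage[] {
  return messages.map(msg => {
    if (typeof msg.content === 'string') return msg
    return {
      ...msg,
      content: msg.content.map(part =>
        part.type === 'image' ? { ...part, image: 'IMAGE_PROCESSED' } : part
      )
    }
  })
}
```

Plain-string messages pass through untouched, so the change is safe to apply to the whole conversation just before persistence.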


5. History drops tool outputs 🐞 Bug ≡ Correctness
Description
Chat history pages load DB messages directly into useChat, but ChatMessages only renders
user/assistant roles and the DB schema doesn’t store tool metadata (e.g., toolName), so
persisted tool outputs and other non-text artifacts can’t be reconstructed on reload.
Code

app/search/[id]/page.tsx[R41-46]

+  const initialMessages: Message[] = dbMessages.map((dbMsg): Message => ({
+    id: dbMsg.id,
+    role: dbMsg.role as Message['role'],
+    content: dbMsg.content,
+    createdAt: dbMsg.createdAt ? new Date(dbMsg.createdAt) : undefined,
+  }));
Evidence
The history page maps all DB messages (role/content) into Message[] without
filtering/transforming. The renderer ignores non-user/assistant roles, and the DB schema
contains only role and content (no name/type), so tool results (and other structured
messages) are effectively lost for rendering after reload.

app/search/[id]/page.tsx[39-46]
components/chat-messages.tsx[38-75]
lib/db/schema.ts[52-61]
lib/actions/chat.ts[106-112]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Tool outputs and other non-text UI artifacts are not renderable from persisted chat history: the DB doesn’t store tool metadata, and the UI ignores non-user/assistant roles.
## Issue Context
- History page loads `dbMessages` into `Message[]`.
- `ChatMessages` only renders `user` and `assistant`.
- DB schema stores only `role` + `content` (text), so toolName/type/annotation details are lost.
## Fix Focus Areas
- app/search/[id]/page.tsx[39-46]
- components/chat-messages.tsx[38-75]
- lib/db/schema.ts[52-61]
- lib/actions/chat.ts[106-112]
## What to implement
Choose one approach:
1) **Schema approach (recommended)**: add columns to persist `type` and `name` (toolName) (and any structured payload refs), then rehydrate tool messages into UI via `ToolResultRenderer`.
2) **Content-encoding approach**: persist a JSON envelope in `content` that includes `{type,name,payload}` and parse it on load; update `ChatMessages` to render those envelopes.
Additionally:
- Filter/transform loaded messages so the UI can render tool results and other artifacts deterministically.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Remediation recommended

6. Random keys remount tool UI 🐞 Bug ⚙ Maintainability
Description
Tool-result annotations use Math.random() in their ids/keys, causing React to remount tool result
components every render and potentially reset collapse/open state.
Code

components/chat-messages.tsx[R28-35]

+  // Render tool result annotations first (they come before the text)
+  const toolAnnotations = annotations.filter((a: Annotation) => a.type === 'tool_result')
+  for (const ann of toolAnnotations) {
+    renderedMessages.push({
+      id: `tool-${ann.toolName}-${Math.random().toString(36).slice(2)}`,
+      component: <ToolResultRenderer toolName={ann.toolName} result={ann.result} />,
+      isCollapsed: true
+    })
Evidence
The rendered message id includes a random suffix, so React keys are unstable across renders and will
force unmount/mount cycles for tool result rows.

components/chat-messages.tsx[28-36]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`Math.random()` is used to construct ids/keys for tool annotation rendering, making keys unstable and causing unnecessary remounts.
## Issue Context
Tool results are derived from `annotations` and rendered into a list keyed by `id`.
## Fix Focus Areas
- components/chat-messages.tsx[28-36]
## What to implement
- Replace the random-based id with a deterministic key derived from the annotation:
- e.g., `${toolName}-${index}` while iterating,
- or include a stable `toolCallId`/sequence number in the annotation emitted by the server and key off that.
- Ensure keys remain stable for identical annotation sequences across renders.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
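
A deterministic replacement for the Math.random() keys, sketched along the lines the review suggests (`${toolName}-${index}`); the helper name is hypothetical:

```typescript
// Illustrative annotation shape for tool results.
interface ToolAnnotation { type: 'tool_result'; toolName: string }

// Derive a React key from the annotation's position in the sequence:
// identical annotation sequences yield identical keys across renders,
// so React preserves component state instead of remounting.
function toolAnnotationKey(ann: ToolAnnotation, index: number): string {
  return `tool-${ann.toolName}-${index}`
}
```

If the server can emit a stable toolCallId per invocation, keying off that id instead of the index is more robust when annotations are reordered or interleaved.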


Grey Divider

Qodo Logo

Comment on lines +77 to +100
// Render inquiry annotation if present
const inquiry = annotations.find((a: Annotation) => a.type === 'inquiry')
if (inquiry) {
const { Copilot } = require('./copilot')
renderedMessages.push({
id: 'inquiry',
component: <Copilot inquiry={{ value: inquiry.data }} />
})
}

// Render related queries annotation
const related = annotations.findLast?.((a: Annotation) => a.type === 'related')
if (related && related.relatedQueries?.items?.length > 0) {
const SearchRelated = require('./search-related').default
const { Section: SectionComp } = require('./section')
renderedMessages.push({
id: 'related',
component: (
<SectionComp title="Related" separator={true}>
<SearchRelated relatedQueries={related.relatedQueries} />
</SectionComp>
)
})
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

1. Resolution results not rendered 🐞 Bug ≡ Correctness

/api/chat/stream emits a resolution_search_result data annotation, but the client never renders
that annotation type, so resolution-search UI (carousel/GeoJSON/map preview) will not appear.
Agent Prompt
## Issue description
Resolution search results are sent as `type: 'resolution_search_result'` annotations, but the UI never renders them, so users only see the summary text and lose the map/imagery output.

## Issue Context
The route handler emits a `resolution_search_result` annotation containing `{ image, mapboxImage, googleImage, geoJson?, ... }`. The client currently only handles `tool_result`, `inquiry`, and `related` annotations.

## Fix Focus Areas
- components/chat-messages.tsx[77-100]
- app/api/chat/stream/route.ts[235-260]

## What to implement
- Add a renderer branch for `annotation.type === 'resolution_search_result'`.
- Reuse existing UI components (e.g., `ResolutionCarousel`, `GeoJsonLayer`) to display imagery + optional GeoJSON.
- Ensure rendering order matches expectations (e.g., show carousel/overlay before/alongside summary).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment thread components/tool-result-renderer.tsx
Comment thread components/chat-panel.tsx Outdated
Comment on lines +209 to +214
const content: CoreMessage['content'] = [
{ type: 'text', text: 'Analyze this map view.' },
{ type: 'image', image: fileData, mimeType: 'image/png' }
]
messages.push({ role: 'user', content })


Action required

4. Base64 images saved to db 🐞 Bug ➹ Performance

Resolution-search pushes fileData (a base64 data URL) into CoreMessage.content and
saveChatAsync persists it verbatim; saveChat then JSON-stringifies it into the
messages.content text column, risking extreme DB bloat and slow history loads.
Agent Prompt
## Issue description
Resolution-search base64 images are being persisted to the `messages` table, which can massively increase DB size and make chat history loads slow or fail.

## Issue Context
- The route inserts `{ type: 'image', image: fileData }` into `CoreMessage.content`.
- `saveChatAsync` persists that `content` verbatim.
- `saveChat` JSON-stringifies object content into `messages.content` (text).

## Fix Focus Areas
- app/api/chat/stream/route.ts[209-214]
- app/api/chat/stream/route.ts[274-337]
- lib/actions/chat.ts[106-112]

## What to implement
- Before calling `saveChatAsync` (or inside it), sanitize message contents:
  - Remove image parts entirely from messages being saved, OR
  - Replace the `image` field with a constant placeholder (e.g., `"IMAGE_PROCESSED"`).
- If you still need to persist images, store them outside the `messages` table (blob storage) and only persist a reference URL/id.

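The sanitize-before-save step above can be sketched as a pure function, assuming a simplified version of the SDK's message content union. The `IMAGE_PROCESSED` placeholder is the constant suggested in the comment; the `Part`/`Msg` types are illustrative, not the repo's schema.

```typescript
// Sketch: replace base64 image payloads with a placeholder before persisting,
// so the messages.content text column stays small. Types are simplified
// stand-ins for the ai SDK's CoreMessage content union.
type Part = { type: 'text'; text: string } | { type: 'image'; image: string }
type Msg = { role: string; content: string | Part[] }

function sanitizeForSave(messages: Msg[]): Msg[] {
  return messages.map(m => {
    if (typeof m.content === 'string') return m
    return {
      ...m,
      // Swap the data URL for a constant marker; text parts pass through.
      content: m.content.map(p =>
        p.type === 'image' ? { ...p, image: 'IMAGE_PROCESSED' } : p
      )
    }
  })
}

const saved = sanitizeForSave([
  {
    role: 'user',
    content: [
      { type: 'text', text: 'Analyze this map view.' },
      { type: 'image', image: 'data:image/png;base64,iVBORw0...' }
    ]
  }
])
```

If the image must survive reloads, the same hook is the natural place to upload it to blob storage and persist only the returned URL.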

Comment thread app/search/[id]/page.tsx Outdated
Comment on lines +41 to +46
const initialMessages: Message[] = dbMessages.map((dbMsg): Message => ({
id: dbMsg.id,
role: dbMsg.role as Message['role'],
content: dbMsg.content,
createdAt: dbMsg.createdAt ? new Date(dbMsg.createdAt) : undefined,
}));

Action required

5. History drops tool outputs 🐞 Bug ≡ Correctness

Chat history pages load DB messages directly into useChat, but ChatMessages only renders
user/assistant roles and the DB schema doesn’t store tool metadata (e.g., toolName), so
persisted tool outputs and other non-text artifacts can’t be reconstructed on reload.
Agent Prompt
## Issue description
Tool outputs and other non-text UI artifacts are not renderable from persisted chat history: the DB doesn’t store tool metadata, and the UI ignores non-user/assistant roles.

## Issue Context
- History page loads `dbMessages` into `Message[]`.
- `ChatMessages` only renders `user` and `assistant`.
- DB schema stores only `role` + `content` (text), so toolName/type/annotation details are lost.

## Fix Focus Areas
- app/search/[id]/page.tsx[39-46]
- components/chat-messages.tsx[38-75]
- lib/db/schema.ts[52-61]
- lib/actions/chat.ts[106-112]

## What to implement
Choose one approach:
1) **Schema approach (recommended)**: add columns to persist `type` and `name` (toolName) (and any structured payload refs), then rehydrate tool messages into UI via `ToolResultRenderer`.
2) **Content-encoding approach**: persist a JSON envelope in `content` that includes `{type,name,payload}` and parse it on load; update `ChatMessages` to render those envelopes.

Additionally:
- Filter/transform loaded messages so the UI can render tool results and other artifacts deterministically.

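The content-encoding approach (option 2) can be sketched as a small envelope codec over the existing text column. The envelope field names are illustrative; the key property is that plain text rows fall through untouched.

```typescript
// Sketch of option 2: persist tool messages as a JSON envelope inside the
// existing text `content` column and parse it back on load. Field names
// here ({type, name, payload}) follow the comment's suggestion.
type Envelope = { type: 'tool_result'; name: string; payload: unknown }

function encodeEnvelope(e: Envelope): string {
  return JSON.stringify(e)
}

function decodeEnvelope(content: string): Envelope | null {
  try {
    const v = JSON.parse(content)
    // Only treat it as an envelope when the marker fields are present;
    // ordinary assistant text falls through to null.
    if (v && v.type === 'tool_result' && typeof v.name === 'string') return v
  } catch {
    /* not JSON: plain text content */
  }
  return null
}

const row = encodeEnvelope({ type: 'tool_result', name: 'search', payload: { results: [] } })
const back = decodeEnvelope(row)
const plain = decodeEnvelope('just a normal assistant reply')
```

On load, messages whose content decodes to an envelope would be routed to `ToolResultRenderer`; everything else renders as today. The schema approach (option 1) avoids the parse but requires the migration described above.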

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 37

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
lib/agents/tools/geospatial.tsx (2)

37-41: ⚠️ Potential issue | 🟠 Major

Avoid logging secret prefixes.

Logging the first 8 characters of COMPOSIO_API_KEY, MAPBOX_ACCESS_TOKEN, and COMPOSIO_USER_ID still leaks information to every log aggregator that ingests server logs, and key prefixes are often enough to identify the account/tier (some providers encode tenancy in the prefix). Use a boolean presence indicator instead.

🛡️ Proposed fix
   console.log('[GeospatialTool] Environment check:', {
-    composioApiKey: composioApiKey ? `${composioApiKey.substring(0, 8)}...` : 'MISSING',
-    mapboxAccessToken: mapboxAccessToken ? `${mapboxAccessToken.substring(0, 8)}...` : 'MISSING',
-    composioUserId: composioUserId ? `${composioUserId.substring(0, 8)}...` : 'MISSING',
+    composioApiKey: composioApiKey ? 'present' : 'MISSING',
+    mapboxAccessToken: mapboxAccessToken ? 'present' : 'MISSING',
+    composioUserId: composioUserId ? 'present' : 'MISSING',
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/tools/geospatial.tsx` around lines 37 - 41, The console.log in the
GeospatialTool environment check is leaking secret prefixes by printing
substrings of composioApiKey, mapboxAccessToken, and composioUserId; change this
to log only boolean presence checks (e.g., !!composioApiKey,
!!mapboxAccessToken, !!composioUserId) or descriptive status strings like
"present"/"missing" in the same console.log call so no parts of secrets are
emitted (locate the block referencing composioApiKey, mapboxAccessToken,
composioUserId and replace the substring masking with presence indicators).

50-57: 🧹 Nitpick | 🔵 Trivial

Prefer static JSON import over require().

require('../../../mapbox_mcp_config.json') bypasses module resolution checks and won't be tree-shaken or type-checked. The project's TypeScript config has "resolveJsonModule": true enabled, allowing direct static JSON imports. The nested try/catch is also redundant—the inner catch simply rethrows, and the config variable is never used after assignment, making it safe to remove entirely.

♻️ Proposed refactor
+import mapboxMcpConfig from '../../../mapbox_mcp_config.json';
@@
-  let config;
-  try {
-    let mapboxMcpConfig;
-    try {
-      mapboxMcpConfig = require('../../../mapbox_mcp_config.json');
-      config = { ...mapboxMcpConfig, mapboxAccessToken };
-      console.log('[GeospatialTool] Config loaded successfully');
-    } catch (configError: any) {
-      throw configError;
-    }
-  } catch (configError: any) {
-    console.error('[GeospatialTool] Failed to load mapbox config:', configError.message);
-    config = { mapboxAccessToken, version: '1.0.0', name: 'mapbox-mcp-server' };
-    console.log('[GeospatialTool] Using fallback config');
-  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/tools/geospatial.tsx` around lines 50 - 57, Replace the dynamic
require of the mapbox JSON and the redundant try/catch with a static JSON import
using the project's resolveJsonModule support: remove the
require('../../../mapbox_mcp_config.json') call and the surrounding try/catch,
statically import the JSON (mapbox_mcp_config.json) at the top of the module,
drop the unused local variable config and/or mapboxMcpConfig if not used
elsewhere, and use the imported object directly where needed (referencing
mapbox_mcp_config.json and any code that previously referenced mapboxMcpConfig
or config).
lib/agents/tools/retrieve.tsx (1)

9-35: ⚠️ Potential issue | 🟠 Major

Robustness gaps in the error path and missing timeout/status check.

Two issues worth addressing while this execute function is being rewritten:

  1. Line 34: `${error} "${url}".` coerces the Error via toString(), which for some runtimes produces "[object Object]" or loses the underlying cause. Extract the message explicitly.
  2. Lines 10–17: No response.ok check and no AbortController timeout on fetch. A hung Jina endpoint will stall the tool (and therefore the whole chat turn) indefinitely; a 4xx/5xx with a non-JSON body falls through to the generic catch, masking the real status.
🛡️ Suggested hardening
   execute: async ({ url }: { url: string }) => {
     let results: SearchResultsType | undefined
+    const controller = new AbortController()
+    const timeout = setTimeout(() => controller.abort(), 15_000)
     try {
       const response = await fetch(`https://r.jina.ai/${url}`, {
         method: 'GET',
         headers: {
           Accept: 'application/json',
           'X-With-Generated-Alt': 'true'
-        }
+        },
+        signal: controller.signal
       })
+      if (!response.ok) {
+        return { error: `Retrieve failed for "${url}" (HTTP ${response.status}).` }
+      }
       const json = await response.json()
       if (!json.data || json.data.length === 0) {
         return { error: `An error occurred while retrieving "${url}". This website may not be supported.` }
       }
       results = {
         results: [
           {
             title: json.data.title,
             content: json.data.content,
             url: json.data.url
           }
         ],
         query: '',
         images: []
       }
     } catch (error) {
       console.error('Retrieve API error:', error)
-      return { error: `${error} "${url}".` }
+      const message = error instanceof Error ? error.message : String(error)
+      return { error: `Failed to retrieve "${url}": ${message}` }
+    } finally {
+      clearTimeout(timeout)
     }

     return results
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/tools/retrieve.tsx` around lines 9 - 35, The catch path is
coercing the Error object into a string and the fetch has no status or timeout
handling; update the fetch call (where you call
fetch(`https://r.jina.ai/${url}`) and call response.json()) to use an
AbortController with a short timeout (e.g., 5s), check response.ok and return a
clear error when status is not ok (include response.status and statusText),
safely parse JSON with a try/catch and handle non-JSON bodies, and in the catch
block return the actual error.message (and error.cause if present) rather than
using `${error}` so the returned error includes the real message and URL.
components/mobile-icons-bar.tsx (1)

42-44: ⚠️ Potential issue | 🟠 Major

setMessages([]) alone is insufficient—the user remains on /search/[oldId] with the old chatId, causing the next message to append to the previous conversation.

The Chat component only auto-navigates to /search/[id] when messages.length reaches 1 and the user is not already on a search route (line 65–67 in chat.tsx: if (!path.includes('search') && messages.length === 1)). When a user clicks "New Chat" on an existing chat page, this condition cannot trigger, so no navigation occurs. The next message will be sent with the stale chatId and persisted to the old chat.

Add router.push('/') to navigate back to home (which generates a fresh id), or generate a new ID and navigate to /search/[newId]. Also, remove the async keyword from handleNewChat since there are no awaits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/mobile-icons-bar.tsx` around lines 42 - 44, handleNewChat
currently only calls setMessages([]) which leaves the user on the old
/search/[oldId] so subsequent messages append to the previous chat; update
handleNewChat (remove the unnecessary async) to also navigate away so a fresh
chatId is created — either call router.push('/') after clearing messages or
generate a new id and call router.push(`/search/${newId}`) (keep setMessages([])
as-is), ensuring you reference the handleNewChat function and the Chat
component's routing behavior when making the change.
components/search-section.tsx (1)

14-39: ⚠️ Potential issue | 🟠 Major

JSON.parse without error handling, and tool-error shape isn't handled.

Two concerns on the new synchronous parse:

  1. JSON.parse(result) will throw on any malformed input; a single bad tool output would crash the surrounding chat message render (and potentially the whole message list). Wrap in a try/catch.
  2. Per the sibling changes in lib/agents/tools/search.tsx, a failed search now returns { error: "An error occurred while searching for \"<query>\"." }. That object is a valid JSON parse, but it has no query/images/results, so ${searchResults.query} renders undefined and <SearchResults results={searchResults.results} /> receives undefined. An explicit error branch would render a proper error state instead of a broken skeleton / crash.
🛡️ Proposed fix
-export function SearchSection({ result }: SearchSectionProps) {
-  const searchResults: TypeSearchResults | undefined = result ? JSON.parse(result) : undefined
-  return (
-    <div>
-      {searchResults ? (
+export function SearchSection({ result }: SearchSectionProps) {
+  let parsed: (TypeSearchResults & { error?: string }) | undefined
+  if (result) {
+    try {
+      parsed = JSON.parse(result)
+    } catch (e) {
+      console.error('SearchSection: failed to parse result JSON', e)
+    }
+  }
+  const searchResults = parsed && !('error' in parsed) ? parsed : undefined
+  return (
+    <div>
+      {parsed && 'error' in parsed ? (
+        <Section className="pt-2 pb-0">
+          <p className="text-sm text-muted-foreground">{parsed.error}</p>
+        </Section>
+      ) : searchResults ? (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/search-section.tsx` around lines 14 - 39, Wrap the JSON.parse in
SearchSection with a try/catch and fall back to a safe undefined/parseError
state instead of letting a thrown parse error bubble out; after parsing,
explicitly check for an error shape (e.g., searchResults?.error) and render an
error branch (for example show a ToolBadge or a Section with the error message)
rather than attempting to use searchResults.query, searchResults.images, or
searchResults.results when they may be undefined; update the JSX branches in
SearchSection to use the parseError / error-object checks so
SearchResultsImageSection and SearchResults only receive valid arrays/strings.
lib/agents/tools/index.tsx (1)

12-24: 🧹 Nitpick | 🔵 Trivial

LGTM — clean removal of RSC streaming plumbing.

Factory signatures align with the updated searchTool/retrieveTool/videoSearchTool/geospatialTool definitions. Minor nit: tools: any loses the strong typing that ToolSet from the ai package would give here — consider Record<string, ReturnType<typeof searchTool> | ...> or the SDK's tool type helper if you want streamText({ tools }) to catch signature regressions at compile time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/tools/index.tsx` around lines 12 - 24, The tools object in
getTools is typed as any which loses compile-time guarantees; replace tools: any
with a stronger type such as the SDK's ToolSet (from 'ai') or a union Record
type using ReturnType for each factory (e.g., Record<string, ReturnType<typeof
searchTool> | ReturnType<typeof retrieveTool> | ReturnType<typeof
videoSearchTool> | ReturnType<typeof geospatialTool>>) so streamText({ tools })
and other callers get proper type checking; update the signature of getTools
(and ToolProps if needed) to reflect the chosen type.
components/chat-panel.tsx (1)

86-105: ⚠️ Potential issue | 🔴 Critical

Attached files are silently dropped on submit.

The UI accepts and displays attachments (lines 64–73, 265–276), but handleSubmit only sends { role: 'user', content: userContent }; selectedFile is cleared (clearAttachment()) without being uploaded or included in the request body. The submit button is even enabled when only a file is attached (input.length === 0 && !selectedFile on line 257), so a user can attach a file, click send, and get an empty message with no indication that the file was discarded.

The API route handler at app/api/chat/stream already expects and supports fileData in the request body (destructured at line 31) and processes it via handleResolutionSearch when action === 'resolution_search'. Encode selectedFile to base64 and include it as fileData in the request body, along with an action: 'resolution_search' parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/chat-panel.tsx` around lines 86 - 105, handleSubmit currently
clears selectedFile via clearAttachment() and only appends the user text; modify
handleSubmit to include the attached file by base64-encoding selectedFile and
passing it as fileData in the append call body alongside mapProvider and
drawnFeatures, and include action: 'resolution_search' so the backend route
(app/api/chat/stream) can process the file; ensure you still clearAttachment()
and setInput('') after the append completes, and keep the submit enabled when
only a file is attached by using the existing selectedFile check.
app/actions.tsx (1)

1-9: 🧹 Nitpick | 🔵 Trivial

Remove or relocate app/actions.tsx — file is unused and misleadingly named.

The file contains only a type export (AIState) and is never imported anywhere in the codebase. The .tsx extension is misleading since there is no JSX or Server Actions present. Either delete the file or move the AIState type to @/lib/types (where AIMessage already resides). The "backward compatibility" comment does not reflect current usage — nothing imports from this file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/actions.tsx` around lines 1 - 9, The file exports only the AIState type
(referencing AIMessage) and is unused and misnamed as a .tsx; either delete
app/actions.tsx or move the AIState type into the existing types module where
AIMessage lives (e.g., add export type AIState = { messages: AIMessage[];
chatId: string; isSharePage?: boolean } to the types file and remove the
redundant file), and update any imports if you relocate the type.
lib/agents/researcher.tsx (1)

134-143: ⚠️ Potential issue | 🟠 Major

messages is mutated in place — caller's retry loop will compound context.

researcher pushes the assistant turn (and any tool responses) onto the caller's messages array. In app/api/chat/stream/route.ts this function is called inside a while loop that re-invokes it until answer.length > 0, so each retry iteration re-sends a growing conversation containing the previous empty-text assistant turn plus its tool calls. This can cause (a) token bloat, (b) the model repeating the same tool call, and (c) the loop failing to converge.

Recommend returning the new segments from researcher and letting the caller decide whether to append (only once, after a successful iteration), or resetting messages length before each retry.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/researcher.tsx` around lines 134 - 143, The researcher function
currently mutates the caller's messages array by pushing the assistant turn and
tool responses (see messages, toolCalls, toolResponses, fullResponse), causing
retries to compound context; change researcher to avoid in-place mutation by
building and returning the new message segments (e.g., newSegments containing
the assistant message and any tool messages) instead of pushing into messages,
update the return to include those segments (alongside result, fullResponse,
hasError, toolResponses), and let the caller decide whether to append
newSegments to messages (only after a successful, non-empty fullResponse) or to
reset messages before retries.
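The non-mutating, bounded retry shape recommended above can be sketched as follows. The caller passes a copy of the history into each attempt, so a callee that pushes turns onto its argument (as researcher currently does) cannot inflate the shared context. Shown synchronously for brevity; the real researcher call is async, and `runOnce` is a stand-in, not the repo's API.

```typescript
// Sketch: bounded retries over a per-attempt copy of the history, so a
// mutating callee cannot compound context across iterations.
type Turn = { role: 'user' | 'assistant' | 'tool'; content: string }

function retryUntilAnswer(
  history: Turn[],
  runOnce: (msgs: Turn[]) => string,
  maxAttempts = 3
): { answer: string; attempts: number } {
  let attempts = 0
  while (attempts < maxAttempts) {
    attempts++
    // Shallow copy: anything runOnce pushes stays local to this attempt.
    const answer = runOnce([...history])
    if (answer.length > 0) return { answer, attempts }
  }
  return { answer: '', attempts } // caller decides how to surface the failure
}

let calls = 0
const out = retryUntilAnswer([{ role: 'user', content: 'hi' }], msgs => {
  calls++
  msgs.push({ role: 'assistant', content: '' }) // discarded with the copy
  return calls < 3 ? '' : 'final answer'
})
```

On a successful attempt the caller would append the returned segments to the real history exactly once, which is the "return new segments" refactor the comment recommends.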
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/api/chat/stream/route.ts`:
- Line 41: currentSystemPrompt is passed directly to researcher and never
augmented with RAG context; call the RAG augmentation flow (use the functions
from lib/actions/rag.ts such as generate_embedding and hybrid_search or your
existing wrapper that returns augmentedSystemPrompt) after fetching
getSystemPrompt(userId) and before invoking researcher, then pass
augmentedSystemPrompt to researcher instead of currentSystemPrompt (reference
symbols: currentSystemPrompt, getSystemPrompt, researcher, generate_embedding,
hybrid_search, augmentedSystemPrompt).
- Around line 45-56: The current mapping that builds messages (variable messages
from clientMessages) strips tool-related data by filtering only roles
'user'|'assistant' and mapping to {role, content}, which loses
assistant.toolInvocations and drops any role === 'tool' turns; update the
construction of CoreMessage[] (the messages array creation) to preserve tool
turns and toolInvocations by: not filtering out role === 'tool', and when
mapping each item (from clientMessages) include toolInvocations and any other
auxiliary fields present on assistant messages (e.g., map to include role,
content, and toolInvocations when present), or add a short clarifying comment if
intentionally discarding tool data, so prior tool outputs and assistant tool
calls are replayed correctly.
- Around line 69-92: The hardcoded reply branch uses fragile exact-match logic
and inconsistent wording; update it to normalize the last user message (trim,
collapse whitespace, remove trailing punctuation and lowercase) before matching
instead of comparing raw lastText, and choose wording consistent with the system
prompt (use "Climate forecasting" everywhere) when building the definition
variable; also guard the branch by asserting messages.length > 0 (or
early-return) before accessing messages[messages.length - 1], and keep the call
to saveChatAsync(chatId, userId, messages, definition) after these checks;
alternatively consider moving these canned replies into the system prompt in
lib/agents/researcher.tsx and only use this branch as a fallback.
- Around line 195-272: handleResolutionSearch currently starts a ReadableStream
whose start callback can throw and those errors escape the outer try/catch, and
saveChatAsync is fire-and-forget; fix by making the stream start an async
function that wraps the body in try/catch/finally, calling controller.error(err)
on any caught error and ensuring controller.close() in success, and in finally
await saveChatAsync(chatId, userId, messages, analysisResult?.summary || '') (or
save an error marker) so chat persistence always runs; refer to
handleResolutionSearch, resolutionSearch, querySuggestor, saveChatAsync and use
controller.error(...) inside the start callback to propagate stream failures to
the response consumer.
- Around line 15-19: The 401 response in POST (app/api/chat/stream/route.ts ->
function POST) returns JSON body but omits Content-Type; update the Response
creation for the unauthorized branch to include the header "Content-Type:
application/json" (or use a JSON helper such as NextResponse.json) so the
response explicitly declares JSON content when returning the Unauthorized
payload.
- Around line 152-156: The main path calls querySuggestor with messages which
may still contain image parts; mirror the sanitisation used in
handleResolutionSearch by creating a sanitizedMessages (filtering out any
message.content parts with type 'image' or equivalent image payloads) and pass
that to querySuggestor instead of raw messages; update the relatedQueries
assignment (variable relatedQueries and call to querySuggestor) to use the
filtered messages so image data is never sent to querySuggestor.
- Around line 129-145: The loop in route.ts using researcher(...) can spin
indefinitely and compounds context because researcher mutates the shared
messages array; add a bounded retry mechanism (e.g., maxAttempts constant and an
attempts counter) and break the loop when attempts >= maxAttempts or when a
retry returns empty fullResponse N times (regardless of useSpecificAPI), and
ensure errorOccurred still influences exit where appropriate; also prevent
in-place mutation of messages by either passing a shallow copy of messages into
researcher(...) or refactoring researcher (in lib/agents/researcher.tsx) to
return the additional assistant/tool-turns instead of pushing them into the
provided messages array so retries don’t inflate context. Ensure you update
usages of answer, errorOccurred, and allToolOutputs accordingly so retries stop
after the maxAttempts and do not duplicate tool outputs.
- Line 113: The stream is sending annotation frames using the "8:[...]" prefix
so annotations end up on individual messages instead of in useChat's data field;
update every controller.enqueue(encoder.encode(`8:[${JSON.stringify(...) }]\n`))
call (the ones emitting tool_result, inquiry, related, resolution_search_result
payloads) to use the data-frame prefix "2:[...]" instead (i.e.,
controller.enqueue(encoder.encode(`2:[${JSON.stringify(...)}]\n`))) so these
payloads populate useChat's data array and get returned by getToolResults(),
getInquiry(), etc.
- Around line 89-91: The streaming Response objects in this route (the four
occurrences that call "new Response(stream, { headers: { 'Content-Type':
'text/plain; charset=utf-8' } })") must include the required protocol header;
update each Response return to add "x-vercel-ai-data-stream": "v1" to the
headers alongside Content-Type so the AI SDK (useChat) can recognize and parse
the stream. Locate the four Response constructions in this file (the streaming
returns around the areas noted) and merge the headers into a single headers
object that includes both 'Content-Type' and 'x-vercel-ai-data-stream': 'v1'.
- Around line 297-317: The saveChatAsync flow is generating new nanoid() for
every message (in the messages -> aiMessages mapping) which breaks dbSaveChat's
onConflictDoUpdate deduping; update the mapping in route.ts to preserve message
identity by using msg.id when present (e.g., aiMessages.push uses msg.id) and
only call nanoid() for messages that lack an id, and ensure tool outputs keep
stable IDs similarly; alternatively (if preserving incoming ids isn't possible),
call the db routine to delete prior messages for the chat before inserting the
replayed history — reference the messages array, aiMessages, nanoid(),
saveChatAsync and dbSaveChat to locate the changes.
- Around line 187-192: The three fire-and-forget calls to saveChatAsync
(occurring just before returning streamed Responses) must be wrapped with
Next.js after() to ensure the DB write completes after the response is sent;
replace direct calls like saveChatAsync(chatId, userId, ...) with after(() =>
saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries))
for each occurrence (the saveChatAsync invocations in the streaming route
handlers) so the runtime will wait for the save to finish even in serverless
environments.
- Line 80: The stream termination currently enqueues only an "e:" finish-step
and immediately closes the controller (see usages of
controller.enqueue(encoder.encode(...)) and controller.close()), which omits the
required "d:" finish-message; after each existing e: message emit a second
controller.enqueue(encoder.encode(...)) that sends the "d:" chunk with the same
finishReason and usage payload (matching the existing e: content) and then call
controller.close(); update all four occurrences where e: is emitted so useChat
receives the final d: message before stream close.
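The protocol details scattered across the route.ts comments above (the "2:" data frame, the "e:" finish-step followed by a "d:" finish-message, and the x-vercel-ai-data-stream header) can be collected into one sketch. Frame prefixes follow the Vercel AI SDK data stream protocol as described in these comments; the payload fields are illustrative.

```typescript
// Sketch of the data stream frames discussed above. "2:" frames land in
// useChat's data array; "8:" frames (not shown) attach to a message as
// annotations. Streams must end with "e:" (finish-step) then "d:"
// (finish-message), and responses must carry the protocol header.
function dataFrame(payload: unknown): string {
  return `2:[${JSON.stringify(payload)}]\n`
}

function finishFrames(finishReason: string, usage: Record<string, number>): string {
  const body = JSON.stringify({ finishReason, usage })
  return `e:${body}\nd:${body}\n` // finish-step, then finish-message
}

const streamHeaders = {
  'Content-Type': 'text/plain; charset=utf-8',
  'x-vercel-ai-data-stream': 'v1' // required for useChat to parse the stream
}

const frame = dataFrame({ type: 'related', items: [] })
const tail = finishFrames('stop', { promptTokens: 1, completionTokens: 2 })
```

Each of the four streaming Response constructions in the route would use `streamHeaders` and emit `tail` before `controller.close()`.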

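The canned-reply normalization suggested for the hardcoded branch above (trim, collapse whitespace, strip trailing punctuation, lowercase) is small enough to sketch directly. The lookup table and its wording are illustrative.

```typescript
// Sketch: normalize the last user message before matching canned replies,
// so minor whitespace/punctuation/case variations hit the same branch.
function normalizeUserText(raw: string): string {
  return raw
    .trim()
    .replace(/\s+/g, ' ') // collapse runs of whitespace
    .replace(/[?!.]+$/, '') // drop trailing punctuation
    .toLowerCase()
}

const canned: Record<string, string> = {
  // Keys are normalized forms; the value wording is illustrative only.
  'what is climate forecasting': 'Climate forecasting is ...'
}

const hit = canned[normalizeUserText('  What is   Climate Forecasting?? ')]
```

The branch would also guard `messages.length > 0` before reading the last element, per the comment.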
In `@app/search/[id]/page.tsx`:
- Around line 41-46: The initialMessages mapping drops annotations (and possible
toolInvocations), and unsafely casts dbMsg.role, causing
ToolResultRenderer/getToolResults and SearchRelated/getRelatedQueries to have no
data and useChat to get invalid roles; update the mapping in initialMessages to
include annotations and toolInvocations from dbMsg (rehydrate the JSONB column
you added to messages) and validate dbMsg.role against the allowed Message roles
used by useChat (e.g., check against a set like
['user','assistant','system','tool'] and fallback to a safe default) so
annotations are preserved and role values are constrained.
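The role-validation half of the fix above can be sketched as a small helper that checks DB role strings against the roles useChat accepts, with a safe fallback instead of a blind cast. The allowed-role set and fallback choice follow the comment's suggestion.

```typescript
// Sketch: constrain raw DB role strings to the Message roles useChat
// accepts, falling back instead of casting blindly.
const ALLOWED_ROLES = ['user', 'assistant', 'system', 'tool'] as const
type Role = (typeof ALLOWED_ROLES)[number]

function toRole(raw: string): Role {
  return (ALLOWED_ROLES as readonly string[]).includes(raw)
    ? (raw as Role)
    : 'assistant' // fallback for legacy or unknown rows
}
```

The same mapping step would also copy `annotations`/`toolInvocations` from the rehydrated JSONB column onto each message, so `getToolResults()` and `getRelatedQueries()` have data to work with.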

In `@components/chat-messages.tsx`:
- Around line 29-36: The code duplicates provider logic by re-filtering
annotations into toolAnnotations; instead call the provider helper
getToolResults(...) (from the context exposed in chat-provider via
getToolResults) to obtain tool results, then iterate those results to push
entries into renderedMessages with id, component (ToolResultRenderer) and
isCollapsed. Remove the local annotations.filter(...) and use getToolResults to
keep logic centralized and consistent.
- Around line 104-114: The isLastMessage prop is computed from index ===
renderedMessages.length - 1 which can point at inquiry/related/tool rows;
instead, compute the index of the final assistant text/turn when building
renderedMessages and pass isLastMessage only for that assistant message row.
Locate where renderedMessages is created and where CollapsibleMessage is
rendered, determine the last assistant message by scanning renderedMessages for
assistant-type entries (e.g., entries with role/type identifying assistant text)
to get lastAssistantIndex, then change the prop to isLastMessage={index ===
lastAssistantIndex} so only the true assistant response controls
auto-expansion/streaming in CollapsibleMessage.
- Around line 22-36: The keys for ToolResultRenderer are unstable because they
use Math.random(), causing remounts; change the id generation in the
renderedMessages push (the block that builds toolAnnotations and pushes entries
into renderedMessages) to use a stable identifier instead of Math.random()—for
example use an existing annotation unique field (like ann.id), or derive a
deterministic key from ann.toolName plus the annotation index or a hash of
ann.result, and ensure that same stable id is passed as the key to
ToolResultRenderer so React reconciliation preserves component state.
- Around line 77-100: Replace the dynamic require() calls inside the render with
static imports at the top: import Copilot, SearchRelated (or its named export)
and Section (named) instead of using require('./copilot'),
require('./search-related').default and require('./section'); update the
renderedMessages entries to use the imported Copilot, SearchRelated and Section
directly (remove the SectionComp alias), and if you need runtime code-splitting
use next/dynamic to import SearchRelated/Copilot with ssr: false rather than
require() so bundlers and RSC static analysis work correctly.
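The stable-key suggestion above (replacing Math.random() ids) can be sketched as a deterministic key derived from the tool name, annotation index, and result payload. The djb2-style hash is one arbitrary choice; any stable hash works, since the point is only that identical input yields an identical key across renders.

```typescript
// Sketch: derive a stable React key from annotation content instead of
// Math.random(), so reconciliation preserves component state.
function stableKey(toolName: string, index: number, result: string): string {
  // Tiny deterministic hash (djb2 xor variant) over the serialized result;
  // the same input always produces the same key.
  let h = 5381
  for (let i = 0; i < result.length; i++) {
    h = ((h * 33) ^ result.charCodeAt(i)) >>> 0
  }
  return `${toolName}-${index}-${h.toString(16)}`
}

const k1 = stableKey('search', 0, '{"results":[]}')
const k2 = stableKey('search', 0, '{"results":[]}')
```

If annotations already carry a unique `id` field, using it directly is simpler and preferable to hashing.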

In `@components/chat-panel.tsx`:
- Around line 30-31: Remove the redundant messages prop from the ChatPanel
component and read messages directly from the context returned by
useChatContext(); specifically, update the ChatPanel signature to drop the
messages parameter and any prop type that includes messages, then replace usages
of the prop with the messages value from const { append, setMessages, messages }
= useChatContext() (or similar) inside the ChatPanel body, and update any
callers to stop passing messages into ChatPanel so there is a single source of
truth.
- Around line 107-110: The handleClear function only calls setMessages([]) and
clearAttachment(), which leaves the current chatId in ChatProvider intact so
subsequent append calls continue writing to the same server-side chat record;
update handleClear to either (A) remount ChatProvider with a fresh chatId by
navigating to a new route (e.g., push('/') or generate a new /search/[newId] and
navigate) so a new server chat is created, or (B) call the server-side
clearChat/delete API for the active chatId before clearing local state; modify
the handleClear implementation (and any callers of setMessages/clearAttachment)
to perform one of these two flows and ensure chatId is reset or the server
record deleted to keep client and server histories in sync.

In `@components/chat-provider.tsx`:
- Around line 59-65: The current annotations const casts useChat().data to
Annotation[] unsafely and ignores that most annotations are emitted under the
route's "8:" message-annotation frames (so annotations will be empty); update
the logic used by annotations and getToolResults to (1) treat useChat().data as
JSONValue[] at runtime and filter with a type guard that verifies each entry is
an Annotation object (has a string "type" and other expected fields) before
casting, and (2) also extract annotations from message frames (entries shaped
like { type: 'message', message: { annotations: [...] } }) so
message.annotations are included; adjust getToolResults to operate on this
validated combined list (refer to useChat().data, the annotations const, and
getToolResults to locate where to change).
- Around line 77-97: The ChatContext.Provider's value object is recreated on
every render causing unnecessary re-renders; wrap the value passed to
<ChatContext.Provider> in a useMemo and list its dependencies (messages, input,
setInput, handleSubmit, append, setMessages, isLoading, chatId, annotations,
getToolResults, getInquiry, getRelatedQueries) so the object identity only
changes when those actual values change; update the component to import/use
React.useMemo and replace the inline value={{...}} with a memoized value
variable used in the Provider.
- Around line 13-26: The ChatContextValue.append signature is too narrow and
uses an any cast; update ChatContextValue to match the SDK signature used by
useChat().append by changing append to accept (message: Message | CreateMessage,
options?: ChatRequestOptions) => Promise<string | null | undefined> and remove
any as any casts where append is forwarded; ensure imports/types for
CreateMessage and ChatRequestOptions are pulled from the SDK and update all
usages that relied on the previous restrictive shape (e.g., callers sending
body: { action: 'resolution_search', fileData }) to compile with the new types.
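The annotation validation described above can be sketched as a pure helper. The `Annotation` shape and field names here are assumptions for illustration; the project's real type may carry more fields:

```typescript
// A sketch of the validation described above. The Annotation shape is an
// assumption; the project's real type may carry more fields.
type Annotation = { type: string; [key: string]: unknown }

// Type guard: only objects with a string `type` field count as annotations.
function isAnnotation(entry: unknown): entry is Annotation {
  return (
    typeof entry === 'object' &&
    entry !== null &&
    typeof (entry as { type?: unknown }).type === 'string'
  )
}

// Combine top-level data frames with annotations nested inside
// message-shaped frames ({ type: 'message', message: { annotations: [...] } }).
function collectAnnotations(data: unknown[] | undefined): Annotation[] {
  const out: Annotation[] = []
  for (const entry of data ?? []) {
    if (!isAnnotation(entry)) continue
    if (entry.type === 'message') {
      const nested = (entry.message as { annotations?: unknown[] } | undefined)
        ?.annotations
      for (const a of nested ?? []) {
        if (isAnnotation(a)) out.push(a)
      }
    } else {
      out.push(entry)
    }
  }
  return out
}
```

`getToolResults` would then filter this validated combined list by `type` instead of casting `useChat().data` wholesale.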

In `@components/chat.tsx`:
- Around line 70-75: The effect calling router.refresh() fires on mount when the
last persisted message is from the assistant; change the condition to only
refresh when a streaming response just finished by tracking the previous
isLoading state (e.g., a useRef prevIsLoading) and require prevIsLoading ===
true && isLoading === false in the useEffect that inspects messages and
lastMsg.role === 'assistant' before calling router.refresh(); update the
useEffect dependency list accordingly and set prevIsLoading = isLoading at the
end of the effect so only transitions from loading→not-loading trigger the
refresh.
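The transition check above can be isolated as a pure function, with the `useRef` wiring left as a comment sketch (hypothetical names, following the suggestion):

```typescript
// Pure transition check: refresh only when a stream just finished
// (loading -> not loading) and the latest message is from the assistant.
function shouldRefresh(
  prevIsLoading: boolean,
  isLoading: boolean,
  lastRole: string | undefined
): boolean {
  return prevIsLoading && !isLoading && lastRole === 'assistant'
}

// Wiring sketch inside the component (illustrative, per the note above):
//   const prevIsLoading = useRef(false)
//   useEffect(() => {
//     if (shouldRefresh(prevIsLoading.current, isLoading, messages.at(-1)?.role)) {
//       router.refresh()
//     }
//     prevIsLoading.current = isLoading
//   }, [isLoading, messages])
```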

In `@components/collapsible-message.tsx`:
- Around line 28-33: The useEffect currently lists isCollapsed as a dependency
but always resets open to isLastMessage, which overrides user toggles when
isCollapsed changes; update the effect on the component that uses
isCollapsed/open/setOpen/useEffect/isLastMessage so it only responds to changes
in isLastMessage (remove isCollapsed from the dependency array) so open is only
programmatically reset when the message becomes the last message, preserving
user-controlled toggles when isCollapsed flips.

In `@components/copilot.tsx`:
- Around line 69-73: The current code appends the skip sentinel as a visible
user message (append({ role: 'user', content: '{"action": "skip"}' })), which
leaks into UI and persisted chat history; instead send the sentinel as metadata
using the body option (following the drawnFeatures pattern in
followup-panel.tsx) so it isn't rendered or stored as a user-visible
message—update the append call in components/copilot.tsx to append the real user
content (or updatedQuery()) and pass { body: { action: 'skip' } } (or similar)
to convey the skip intent, and ensure any server-side stream/route handler reads
the body metadata rather than relying on message content.

In `@components/followup-panel.tsx`:
- Around line 15-28: The handleSubmit function currently calls append(...)
without error handling and allows duplicate submissions; wrap the append call in
a try/catch/finally inside handleSubmit (use a local submitting state or the
context flag useChatContext().isLoading) to prevent concurrent submissions, set
a submitting boolean true before calling append and false in finally, show or
propagate an error (e.g., set an error state or call a provided notify) from the
catch, and only clear the input via setInput('') on success; also disable the
submit button when submitting or useChatContext().isLoading to guard against
duplicate sends.
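A minimal framework-free sketch of that guard, assuming `append` returns a Promise (function and parameter names are ours, not the component's):

```typescript
// Submission guard: blocks concurrent sends, clears input only on success,
// and surfaces errors instead of swallowing them.
function createGuardedSubmit(
  append: (content: string) => Promise<unknown>,
  clearInput: () => void,
  onError: (err: unknown) => void
) {
  let submitting = false
  return async (content: string) => {
    if (submitting || !content.trim()) return
    submitting = true
    try {
      await append(content)
      clearInput() // only clear on success
    } catch (err) {
      onError(err) // propagate to an error state or notifier
    } finally {
      submitting = false // re-enable submission
    }
  }
}
```

In the component, the `submitting` flag (or `useChatContext().isLoading`) would also drive the button's `disabled` prop.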

In `@components/header-search-button.tsx`:
- Line 160: The button's disabled check uses map from useMap() (Mapbox) and thus
disables the control when mapProvider === 'google' even though
handleResolutionSearch supports Google via mapData.cameraState; update the
disabled expression to be provider-aware (e.g. keep isAnalyzing, then require
map only for Mapbox and require mapData?.cameraState for Google) and apply the
identical change to the mobile button; modify the components using map,
mapProvider, mapData and isAnalyzing so that disabled becomes something like:
disabled={isAnalyzing || (mapProvider === 'mapbox' && !map) || (mapProvider ===
'google' && !mapData?.cameraState)} ensuring you update both the desktop and
mobile button instances that reference disabled.
- Around line 117-128: Update blobToBase64 to handle errors by adding
reader.onerror to reject the Promise on failure and ensure it resolves only with
a non-null string; then avoid duplicate conversions by computing the base64 once
for the chosen blob and reusing it: compute a singleBase64 (using blobToBase64)
from (mapboxBlob || googleBlob) and assign that to fileData, and only compute
mapboxImageData or googleImageData separately if the other provider's blob is
present and different; adjust uses of fileData, mapboxImageData, and
googleImageData accordingly so a single blob isn't read twice.
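The provider-aware disabled logic can be shared by the desktop and mobile buttons as a pure helper (parameter names are illustrative):

```typescript
// Provider-aware disabled check: require the Mapbox map only for Mapbox,
// and the Google camera state only for Google.
function isSearchDisabled(opts: {
  isAnalyzing: boolean
  mapProvider: 'mapbox' | 'google'
  map?: unknown         // Mapbox map instance from useMap(), if any
  cameraState?: unknown // mapData?.cameraState for Google, if any
}): boolean {
  return (
    opts.isAnalyzing ||
    (opts.mapProvider === 'mapbox' && !opts.map) ||
    (opts.mapProvider === 'google' && !opts.cameraState)
  )
}
```

Both button instances would then pass `disabled={isSearchDisabled({ isAnalyzing, mapProvider, map, cameraState: mapData?.cameraState })}`.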

In `@components/resolution-carousel.tsx`:
- Around line 40-44: The FileReader usage in resolution-carousel.tsx creates a
Promise that only resolves on reader.onloadend which can hang if readAsDataURL
fails; update the Promise in the code that creates fileData (the FileReader
block) to attach reader.onerror and reader.onabort handlers that reject the
Promise with the error/event, and ensure the onloadend handler resolves only
after verifying reader.result is usable; also clear the handlers or cancel the
read on rejection to avoid leaks. By rejecting the
Promise on error/abort the outer await will throw and your existing try/finally
(and setIsAnalyzing(false)) will run as expected.

In `@components/search-related.tsx`:
- Around line 18-20: The related-query handler handleRelatedClick currently
calls append({ role: 'user', content: query }) which omits the same body context
(mapProvider, drawnFeatures) that ChatPanel.handleSubmit includes; update
handleRelatedClick to forward the same default body used by
ChatPanel/ChatProvider (include mapProvider and drawnFeatures along with chatId)
or move the default body construction into ChatProvider and call append with
that shared body so related queries receive the same tool/geospatial context;
reference functions/components: handleRelatedClick, ChatPanel.handleSubmit,
ChatProvider, and append, and ensure the appended payload includes mapProvider
and drawnFeatures fields.

In `@components/tool-result-renderer.tsx`:
- Around line 18-21: The code is doing an unnecessary JSON.stringify/parse
round-trip when rendering search results; update the components to accept the
typed object directly instead of a string: change SearchSection and
VideoSearchSection props from result?: string to result?: TypeSearchResults |
SerperSearchResults (or the appropriate concrete types), update their internal
usages to consume the object type (remove parsing), and then in
components/tool-result-renderer.tsx return <SearchSection result={result} /> and
<VideoSearchSection result={result} /> (also update any other call sites and
imports/types to match the new prop signatures).
- Around line 17-23: ToolResultRenderer must short-circuit error-shaped tool
outputs before dispatching to sections: detect the error shape from the incoming
result (either result?.error for objects or, for the search/video paths where
you stringify, parse the string safely and check for an "error" field) inside
the ToolResultRenderer switch/dispatch logic and return an error UI (or a simple
error message) instead of rendering SearchSection, VideoSearchSection, or
RetrieveSection; reference the toolName switch and the result variable and
handle both object and JSON-stringified error payloads so SearchSection,
VideoSearchSection, and RetrieveSection never receive the { error: string }
shape.
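The error short-circuit can be a small helper that handles both object and JSON-stringified payloads (the `{ error: string }` shape follows the tool refactor in this PR; the helper name is ours):

```typescript
// Detect { error: string } tool outputs, whether the result arrives as an
// object or as a JSON string. Returns the message, or undefined if none.
function extractToolError(result: unknown): string | undefined {
  let value = result
  if (typeof result === 'string') {
    try {
      value = JSON.parse(result)
    } catch {
      return undefined // not JSON; let the normal renderer handle it
    }
  }
  if (
    value !== null &&
    typeof value === 'object' &&
    typeof (value as { error?: unknown }).error === 'string'
  ) {
    return (value as { error: string }).error
  }
  return undefined
}
```

The `toolName` switch would call this first and render an error UI when it returns a string, so `SearchSection`, `VideoSearchSection`, and `RetrieveSection` never see the error shape.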

In `@components/video-search-section.tsx`:
- Around line 13-14: The component VideoSearchSection currently calls
JSON.parse(result) directly which can throw and crash the render; wrap the parse
in a try/catch inside VideoSearchSection (protecting the result value) so that
if parsing fails you set searchResults to undefined (or an explicit error flag)
and log the error, then render the existing skeleton/error UI path instead of
letting the exception bubble; update references to searchResults (and any
downstream rendering that expects SerperSearchResults) to handle the fallback
case safely.
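A minimal safe-parse helper along those lines (name is ours; `T` would be `SerperSearchResults` at the call site):

```typescript
// Parse without throwing: returns undefined on failure so the component
// can fall back to its existing skeleton/error UI path.
function safeParseResults<T>(raw: string | undefined): T | undefined {
  if (!raw) return undefined
  try {
    return JSON.parse(raw) as T
  } catch (err) {
    console.error('Failed to parse search results:', err)
    return undefined
  }
}
```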

In `@lib/actions/suggest.ts`:
- Around line 26-41: Replace the streaming usage with the final-value primitive:
stop calling streamObject and use generateObject to return a single
Promise<PartialRelated> (use generateObject with systemPrompt, messages [{ role:
'user', content: query }], and relatedSchema) and remove the for-await loop that
only kept the last partial; also ensure the model passed from getModel() is an
OpenAI model that supports structured outputs (e.g., gpt-4o) before calling
generateObject—validate or coerce the result of getModel() into a compatible
LanguageModel and throw/log a clear error if an incompatible model (like X.AI
grok) is returned.

In `@lib/agents/inquire.tsx`:
- Around line 8-37: The current implementation uses streamObject(...) and
iterates partialObjectStream to set finalInquiry, which only yields the terminal
object and hides partial state; replace streamObject with generateObject
(passing the same params: model (await getModel()) as LanguageModel, system
prompt, messages, schema: inquirySchema) and await its returned object directly,
then assign that result to finalInquiry and return it, removing the for-await
loop over partialObjectStream; ensure types for finalInquiry and the
generateObject call align with PartialInquiry.

In `@lib/agents/tools/geospatial.tsx`:
- Around line 275-283: The switch over queryType inside the IIFE can return
undefined for unknown query types; change the switch in the IIFE (the block that
currently calls prefer('directions_tool') / prefer('matrix_tool') / etc.) to
include a default case that throws a structured Error (or a custom Error
subclass) indicating the queryType is unsupported (include the queryType value
in the message), and then remove the downstream fallback that uses toolName ??
'unknown_tool' so we fail fast and surface a clear "unsupported queryType" error
instead of sending "unknown_tool" to the MCP server.
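A fail-fast sketch of that switch; the case list here is illustrative (only `directions` and `matrix` are named in the comment), with the real mapping living in geospatial.tsx:

```typescript
// Fail fast on unknown query types instead of falling back to 'unknown_tool'.
function toolNameFor(queryType: string): string {
  switch (queryType) {
    case 'directions':
      return 'directions_tool'
    case 'matrix':
      return 'matrix_tool'
    default:
      // Structured error that surfaces the offending value.
      throw new Error(`Unsupported queryType: ${JSON.stringify(queryType)}`)
  }
}
```

With the default case throwing, the downstream `toolName ?? 'unknown_tool'` fallback becomes unreachable and can be deleted.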

In `@lib/agents/writer.tsx`:
- Line 16: The default system prompt string containing the typo "There are also
some proconfigured example queires." should be updated: find the prompt literal
in lib/agents/writer.tsx that includes that exact phrase and replace
"proconfigured" with "preconfigured" and "queires" with "queries" so the prompt
reads "There are also some preconfigured example queries."

---

Outside diff comments:
In `@app/actions.tsx`:
- Around line 1-9: The file exports only the AIState type (referencing
AIMessage) and is unused and misnamed as a .tsx; either delete app/actions.tsx
or move the AIState type into the existing types module where AIMessage lives
(e.g., add export type AIState = { messages: AIMessage[]; chatId: string;
isSharePage?: boolean } to the types file and remove the redundant file), and
update any imports if you relocate the type.

In `@components/chat-panel.tsx`:
- Around line 86-105: handleSubmit currently clears selectedFile via
clearAttachment() and only appends the user text; modify handleSubmit to include
the attached file by base64-encoding selectedFile and passing it as fileData in
the append call body alongside mapProvider and drawnFeatures, and include
action: 'resolution_search' so the backend route (app/api/chat/stream) can
process the file; ensure you still clearAttachment() and setInput('') after the
append completes, and keep the submit enabled when only a file is attached by
using the existing selectedFile check.

In `@components/mobile-icons-bar.tsx`:
- Around line 42-44: handleNewChat currently only calls setMessages([]) which
leaves the user on the old /search/[oldId] so subsequent messages append to the
previous chat; update handleNewChat (remove the unnecessary async) to also
navigate away so a fresh chatId is created — either call router.push('/') after
clearing messages or generate a new id and call router.push(`/search/${newId}`)
(keep setMessages([]) as-is), ensuring you reference the handleNewChat function
and the Chat component's routing behavior when making the change.

In `@components/search-section.tsx`:
- Around line 14-39: Wrap the JSON.parse in SearchSection with a try/catch and
fall back to a safe undefined/parseError state instead of letting a thrown parse
error bubble out; after parsing, explicitly check for an error shape (e.g.,
searchResults?.error) and render an error branch (for example show a ToolBadge
or a Section with the error message) rather than attempting to use
searchResults.query, searchResults.images, or searchResults.results when they
may be undefined; update the JSX branches in SearchSection to use the parseError
/ error-object checks so SearchResultsImageSection and SearchResults only
receive valid arrays/strings.

In `@lib/agents/researcher.tsx`:
- Around line 134-143: The researcher function currently mutates the caller's
messages array by pushing the assistant turn and tool responses (see messages,
toolCalls, toolResponses, fullResponse), causing retries to compound context;
change researcher to avoid in-place mutation by building and returning the new
message segments (e.g., newSegments containing the assistant message and any
tool messages) instead of pushing into messages, update the return to include
those segments (alongside result, fullResponse, hasError, toolResponses), and
let the caller decide whether to append newSegments to messages (only after a
successful, non-empty fullResponse) or to reset messages before retries.
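A non-mutating sketch of that change; the `Msg` shape and names are simplified illustrations, not the project's real types:

```typescript
// Build the new turn's segments and return them, leaving the caller's
// messages array untouched.
type Msg = { role: string; content: string }

function buildNewSegments(fullResponse: string, toolResponses: Msg[]): Msg[] {
  const segments: Msg[] = []
  if (fullResponse.length > 0) {
    segments.push({ role: 'assistant', content: fullResponse })
  }
  // Tool messages ride along with the assistant turn.
  segments.push(...toolResponses)
  return segments
}

// The caller commits only after a successful, non-empty response:
//   if (!hasError && fullResponse) messages = [...messages, ...newSegments]
```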

In `@lib/agents/tools/geospatial.tsx`:
- Around line 37-41: The console.log in the GeospatialTool environment check is
leaking secret prefixes by printing substrings of composioApiKey,
mapboxAccessToken, and composioUserId; change this to log only boolean presence
checks (e.g., !!composioApiKey, !!mapboxAccessToken, !!composioUserId) or
descriptive status strings like "present"/"missing" in the same console.log call
so no parts of secrets are emitted (locate the block referencing composioApiKey,
mapboxAccessToken, composioUserId and replace the substring masking with
presence indicators).
- Around line 50-57: Replace the dynamic require of the mapbox JSON and the
redundant try/catch with a static JSON import using the project's
resolveJsonModule support: remove the require('../../../mapbox_mcp_config.json')
call and the surrounding try/catch, statically import the JSON
(mapbox_mcp_config.json) at the top of the module, drop the unused local
variable config and/or mapboxMcpConfig if not used elsewhere, and use the
imported object directly where needed (referencing mapbox_mcp_config.json and
any code that previously referenced mapboxMcpConfig or config).
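For the secret-logging fix above, a presence-only helper keeps the log useful without emitting any part of a credential (helper name is ours):

```typescript
// Log only presence of secrets, never their contents or substrings.
function envStatus(
  vars: Record<string, string | undefined>
): Record<string, 'present' | 'missing'> {
  const status: Record<string, 'present' | 'missing'> = {}
  for (const [name, value] of Object.entries(vars)) {
    status[name] = value ? 'present' : 'missing'
  }
  return status
}

// Usage sketch in the environment check:
//   console.log('GeospatialTool env:',
//     envStatus({ composioApiKey, mapboxAccessToken, composioUserId }))
```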

In `@lib/agents/tools/index.tsx`:
- Around line 12-24: The tools object in getTools is typed as any which loses
compile-time guarantees; replace tools: any with a stronger type such as the
SDK's ToolSet (from 'ai') or a union Record type using ReturnType for each
factory (e.g., Record<string, ReturnType<typeof searchTool> | ReturnType<typeof
retrieveTool> | ReturnType<typeof videoSearchTool> | ReturnType<typeof
geospatialTool>>) so streamText({ tools }) and other callers get proper type
checking; update the signature of getTools (and ToolProps if needed) to reflect
the chosen type.

In `@lib/agents/tools/retrieve.tsx`:
- Around line 9-35: The catch path is coercing the Error object into a string
and the fetch has no status or timeout handling; update the fetch call (where
you call fetch(`https://r.jina.ai/${url}`) and call response.json()) to use an
AbortController with a short timeout (e.g., 5s), check response.ok and return a
clear error when status is not ok (include response.status and statusText),
safely parse JSON with a try/catch and handle non-JSON bodies, and in the catch
block return the actual error.message (and error.cause if present) rather than
using `${error}` so the returned error includes the real message and URL.
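A sketch combining the timeout, status check, and safe JSON parse; the injectable `fetchImpl` parameter is our addition for testability, and the error strings are illustrative:

```typescript
// Fetch with an AbortController timeout, response.ok check, and safe JSON
// parsing; returns { data } or a descriptive { error } including the URL.
async function fetchJsonWithTimeout(
  url: string,
  timeoutMs = 5000,
  fetchImpl: typeof fetch = fetch
): Promise<{ data?: unknown; error?: string }> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    const response = await fetchImpl(url, { signal: controller.signal })
    if (!response.ok) {
      return { error: `Request to ${url} failed: ${response.status} ${response.statusText}` }
    }
    try {
      return { data: await response.json() }
    } catch {
      return { error: `Non-JSON response from ${url}` }
    }
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err)
    return { error: `Fetch failed for ${url}: ${message}` }
  } finally {
    clearTimeout(timer)
  }
}
```

The retrieve tool would call this with `https://r.jina.ai/${url}` and return the `{ error }` shape its renderer already expects.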

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 77b166e7-096f-4885-8ce2-664e41391f3c

📥 Commits

Reviewing files that changed from the base of the PR and between 99505ba and 40f56ad.

📒 Files selected for processing (29)
  • app/actions.tsx
  • app/api/chat/stream/route.ts
  • app/page.tsx
  • app/search/[id]/page.tsx
  • components/chat-messages.tsx
  • components/chat-panel.tsx
  • components/chat-provider.tsx
  • components/chat.tsx
  • components/collapsible-message.tsx
  • components/copilot.tsx
  • components/followup-panel.tsx
  • components/header-search-button.tsx
  • components/message.tsx
  • components/mobile-icons-bar.tsx
  • components/resolution-carousel.tsx
  • components/search-related.tsx
  • components/search-section.tsx
  • components/tool-result-renderer.tsx
  • components/video-search-section.tsx
  • lib/actions/suggest.ts
  • lib/agents/inquire.tsx
  • lib/agents/query-suggestor.tsx
  • lib/agents/researcher.tsx
  • lib/agents/tools/geospatial.tsx
  • lib/agents/tools/index.tsx
  • lib/agents/tools/retrieve.tsx
  • lib/agents/tools/search.tsx
  • lib/agents/tools/video-search.tsx
  • lib/agents/writer.tsx
💤 Files with no reviewable changes (1)
  • lib/agents/query-suggestor.tsx
📜 Review details
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The app/api/chats/route.ts endpoint correctly returns { chats: [] } for unauthenticated users instead of a 401 error. This is the intended behavior for better UX and should not be changed to return 401.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: Main branch has database schema with vector embeddings and RPC functions for RAG but never uses them. The PR branch activates this dormant infrastructure by actually calling these functions to implement context-aware responses.
📚 Learning: 2026-02-08T16:50:33.791Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.

Applied to files:

  • lib/agents/researcher.tsx
🔇 Additional comments (10)
components/message.tsx (1)

10-24: LGTM

Migration from StreamableValue<string> to plain string is clean; LaTeX preprocessing remains correct and the content || '' guard safely handles any accidental undefined from callers.

app/page.tsx (1)

1-17: LGTM — provider swap is clean and consistent with app/search/[id]/page.tsx.

lib/agents/tools/video-search.tsx (1)

3-28: LGTM — factory signature cleanup and error shape is consistent with other tools in this PR.

Returning a plain { error: string } object mirrors the pattern in lib/agents/tools/search.tsx and retrieve.tsx, keeping the tool result rendering surface uniform for components/tool-result-renderer.tsx.

lib/agents/tools/search.tsx (1)

5-55: LGTM — consistent with the tool refactor pattern across the PR.

Error shape ({ error: string }) and removal of ToolProps match video-search.tsx and retrieve.tsx, keeping components/tool-result-renderer.tsx happy.

components/resolution-carousel.tsx (1)

46-54: Payload size limits are already configured adequately; base64 overhead is mitigated by client-side compression.

The satellite image is base64-encoded and sent via request.json() in /api/chat/stream, adding ~33% size overhead. However, compressImage reduces dimensions to 1024×1024 and converts to JPEG at 0.8 quality before transmission, which should keep payloads well below the 200 MB bodySizeLimit configured for server actions in next.config.mjs. Verify that hosting platforms and any intermediate proxies align with this limit if deploying outside the default environment, but the current compression strategy is sound.

lib/agents/researcher.tsx (1)

13-75: System prompt updates look coherent; verify backtick escaping renders.

The ${'search'}, ${'retrieve'}, ${'geospatialQueryTool'} interpolations inside a String.raw template produce literal backticks in the rendered prompt. Confirm this is the intended surface shown to the model (vs. previous version that presumably used markdown fencing). Functionally neutral, just a readability/consistency note for the prompt content.

components/chat-provider.tsx (2)

30-34: useChatContext LGTM.

Clean throw-on-missing-provider pattern.


42-57: No action needed — The concern about toolInvocations in initialMessages does not apply to the current implementation.

The codebase only persists messages with role and content to the database (schema and mapping in search/[id]/page.tsx confirm this). toolInvocations are never stored or sent to initialMessages. Additionally, useChat is not configured with tools, and metadata like annotations (inquiry, related) flows separately through the stream protocol, not through message fields. If tool-calling and replay are added in the future, proper persistence and sendExtraMessageFields would then be relevant.

components/chat-messages.tsx (1)

78-85: No action required. The inquiry prop shape passed to <Copilot> is correct—CopilotProps expects inquiry: { value: PartialInquiry }, and the component receives exactly that at line 83.

app/api/chat/stream/route.ts (1)

1-11: 🧹 Nitpick | 🔵 Trivial

Unused imports.

streamText and LanguageModel (line 1) and getModel (line 8) are imported but never referenced in this module. Safe to drop.

⛔ Skipped due to learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.

Comment thread app/api/chat/stream/route.ts
? { lat: parseFloat(latitude), lng: parseFloat(longitude) }
: undefined

const currentSystemPrompt = (await getSystemPrompt(userId)) || ''

⚠️ Potential issue | 🟠 Major

RAG context is still not being passed to the researcher.

currentSystemPrompt is assigned directly from getSystemPrompt(userId) and passed to researcher on line 135, with no call to the RAG augmentation (generate_embedding / hybrid_search from lib/actions/rag.ts). The retrieved-context infrastructure remains dormant in this endpoint, replicating the exact same bug previously identified in app/actions.tsx (where currentSystemPrompt was passed instead of augmentedSystemPrompt). If RAG was intended to be active post-migration, augment the prompt here before invoking researcher.

Based on learnings: "there's a bug on line 433 of app/actions.tsx where currentSystemPrompt is passed to the researcher instead of augmentedSystemPrompt, preventing the retrieved context from being used" — the same pattern is repeated here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` at line 41, currentSystemPrompt is passed
directly to researcher and never augmented with RAG context; call the RAG
augmentation flow (use the functions from lib/actions/rag.ts such as
generate_embedding and hybrid_search or your existing wrapper that returns
augmentedSystemPrompt) after fetching getSystemPrompt(userId) and before
invoking researcher, then pass augmentedSystemPrompt to researcher instead of
currentSystemPrompt (reference symbols: currentSystemPrompt, getSystemPrompt,
researcher, generate_embedding, hybrid_search, augmentedSystemPrompt).

Comment on lines +69 to +92
if (lastText === 'what is a planet computer?' || lastText === 'what is qcx-terra?') {
  const definition = lastText === 'what is a planet computer?'
    ? `A planet computer is a proprietary environment aware system that interoperates weather forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`
    : `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`;

  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    start(controller) {
      // Send text content
      controller.enqueue(encoder.encode(`0:${JSON.stringify(definition)}\n`))
      // Send finish message
      controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
      // Send data with metadata
      controller.enqueue(encoder.encode(`2:[{"relatedQueries":{"items":[]},"type":"related"}]\n`))
      controller.close()
    }
  })

  saveChatAsync(chatId, userId, messages, definition)

  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  })
}

⚠️ Potential issue | 🟡 Minor

Hardcoded replies: content drift vs. system prompt, and fragile string matching.

  • The Planet Computer definition returned here says "weather forecasting", while the system prompt source of truth in lib/agents/researcher.tsx (line 73) says "Climate forecasting". Pick one.
  • Matching on exact lowercased strings (including a trailing ?) means "What is a planet computer" (no question mark) or an extra trailing space falls through to the task manager and gets a different, model-generated answer — a visible inconsistency. Consider normalizing more aggressively (strip punctuation/whitespace) or move these responses into the system prompt only.
  • On empty messages, messages[messages.length - 1] is undefined — safe due to the optional chaining on line 68, but worth asserting at least one message before branching into the task-manager path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 69 - 92, The hardcoded reply
branch uses fragile exact-match logic and inconsistent wording; update it to
normalize the last user message (trim, collapse whitespace, remove trailing
punctuation and lowercase) before matching instead of comparing raw lastText,
and choose wording consistent with the system prompt (use "Climate forecasting"
everywhere) when building the definition variable; also guard the branch by
asserting messages.length > 0 (or early-return) before accessing
messages[messages.length - 1], and keep the call to saveChatAsync(chatId,
userId, messages, definition) after these checks; alternatively consider moving
these canned replies into the system prompt in lib/agents/researcher.tsx and
only use this branch as a fallback.
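The normalization suggested above can be a one-line-per-step helper (the function name is ours; the route would compare against the already-normalized canned phrases):

```typescript
// Normalize before matching canned replies: lowercase, collapse internal
// whitespace, trim, then strip trailing punctuation and whitespace.
function normalizeQuery(text: string): string {
  return text
    .toLowerCase()
    .replace(/\s+/g, ' ')
    .trim()
    .replace(/[\s?!.]+$/, '')
}

// The route branch could then compare:
//   if (normalizeQuery(lastText ?? '') === 'what is a planet computer') { ... }
```

With this in place, "What is a planet computer" (no question mark) and variants with stray spaces hit the canned reply instead of falling through to the task manager.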

Comment thread app/api/chat/stream/route.ts Outdated
Comment thread app/api/chat/stream/route.ts
Comment thread components/video-search-section.tsx Outdated
Comment thread lib/actions/suggest.ts Outdated
Comment on lines +26 to +41
let finalSuggestions: PartialRelated = {}

const result = await streamObject({
  model: (await getModel()) as LanguageModel,
  system: systemPrompt,
  messages: [{ role: 'user', content: query }],
  schema: relatedSchema
})

for await (const obj of result.partialObjectStream) {
  if (obj && typeof obj === 'object' && 'items' in obj) {
    finalSuggestions = obj as PartialRelated
  }
}

return finalSuggestions

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

ai SDK 4.3 generateObject vs streamObject usage for final-value-only consumers

💡 Result:

For final-value-only consumers in Vercel AI SDK 4.3 (part of v4 series), use generateObject. It awaits completion and returns the full validated structured object via { object }, ideal when you only need the end result without streaming partial updates. streamObject is for streaming scenarios: it provides partialObjectStream for progressive updates, elementStream (array mode), and a final object promise, but requires handling the stream even if just awaiting the end. generateObject throws AI_NoObjectGeneratedError on failure; streamObject handles errors in-stream via onError. Note: These are deprecated in later versions (v5+) favoring generateText({ output: Output.object }) / streamText, but available in 4.3. No tool calling support with generateObject/streamObject; use text variants.

Citations:


🏁 Script executed:

# Find the getModel() function definition
fd -t f -e ts -e js | xargs rg -l "getModel" | head -20

Repository: QueueLab/QCX

Length of output: 147


🏁 Script executed:

# Search for getModel function definition
rg -n "^\s*(export\s+)?(async\s+)?function\s+getModel|^\s*const\s+getModel\s*=" --type ts --type js

Repository: QueueLab/QCX

Length of output: 143


🏁 Script executed:

# Check suggest.ts and related model configuration
fd -t f "suggest.ts" | xargs cat -n

Repository: QueueLab/QCX

Length of output: 2052


🏁 Script executed:

# Read the getModel function implementation
cat -n lib/utils/index.ts | head -60

Repository: QueueLab/QCX

Length of output: 2644


🏁 Script executed:

# Read more of the getModel function to see all model options
cat -n lib/utils/index.ts | sed -n '24,100p'

Repository: QueueLab/QCX

Length of output: 3331


🏁 Script executed:

# Check git history or PR context for suggest.ts
git log --oneline -n 20 -- lib/actions/suggest.ts 2>/dev/null || echo "Git history unavailable"

Repository: QueueLab/QCX

Length of output: 117


Use generateObject instead of streamObject—but require OpenAI gpt-4o for structured output compatibility.

The signature returns Promise<PartialRelated>, and the loop at lines 35–39 discards every partial except the last. generateObject is the correct primitive for final-value-only consumers and eliminates wasted parsing overhead.

Critical: X.AI's grok-4-fast-non-reasoning model does not support JSON Schema structured outputs and will reject this call. Since getModel() can return X.AI by default or user selection, you must either:

  1. Preferred: Update the getModel() call to enforce OpenAI: const model = (await getModel()) as LanguageModel; /* ensure gpt-4o or compatible */, or
  2. Explicitly pass OpenAI: replace getModel() with OpenAI gpt-4o initialization.

Ensure model selection supports structured outputs before deploying.

♻️ Proposed refactor
-import { CoreMessage, LanguageModel, streamObject } from 'ai'
+import { CoreMessage, LanguageModel, generateObject } from 'ai'
@@
-  let finalSuggestions: PartialRelated = {}
-
-  const result = await streamObject({
-    model: (await getModel()) as LanguageModel,
-    system: systemPrompt,
-    messages: [{ role: 'user', content: query }],
-    schema: relatedSchema
-  })
-
-  for await (const obj of result.partialObjectStream) {
-    if (obj && typeof obj === 'object' && 'items' in obj) {
-      finalSuggestions = obj as PartialRelated
-    }
-  }
-
-  return finalSuggestions
+  const { object } = await generateObject({
+    model: (await getModel()) as LanguageModel,
+    system: systemPrompt,
+    messages: [{ role: 'user', content: query }],
+    schema: relatedSchema
+  })
+
+  return object
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/actions/suggest.ts` around lines 26 - 41, Replace the streaming usage
with the final-value primitive: stop calling streamObject and use generateObject
to return a single Promise<PartialRelated> (use generateObject with
systemPrompt, messages [{ role: 'user', content: query }], and relatedSchema)
and remove the for-await loop that only kept the last partial; also ensure the
model passed from getModel() is an OpenAI model that supports structured outputs
(e.g., gpt-4o) before calling generateObject—validate or coerce the result of
getModel() into a compatible LanguageModel and throw/log a clear error if an
incompatible model (like X.AI grok) is returned.

Comment thread lib/agents/inquire.tsx Outdated
Comment thread lib/agents/tools/geospatial.tsx
Comment thread lib/agents/writer.tsx Outdated
@ngoiyaeric
Collaborator Author

@coderabbitai there are no tokens being generated and no multimodal features

@coderabbitai
Contributor

coderabbitai Bot commented Apr 25, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== Searching for token-related code ==="
rg -n "token" -g '*.ts' -g '*.tsx' -C2

echo "=== Searching for multimodal/image-related code in new files ==="
rg -n "multimodal|image|base64|vision" --type=ts -C2

echo "=== Checking chat-provider.tsx ==="
fd "chat-provider.tsx" --exec cat {}

echo "=== Checking stream route ==="
fd --full-path --glob '**/chat/stream/route.ts' --exec cat {}

echo "=== Checking header-search-button for image/base64 usage ==="
fd "header-search-button.tsx" --exec cat {}

echo "=== Checking resolution-carousel for image/base64 ==="
fd "resolution-carousel.tsx" --exec cat {}

Length of output: 14333


@ngoiyaeric, understood — thanks for the clarification! To summarize what I'm seeing in the code:

  • No token generation/counting: The new ChatProvider and stream route don't implement any token usage tracking or generation metrics — that concern doesn't apply here.
  • No multimodal LLM features: While header-search-button.tsx and resolution-carousel.tsx do convert map captures to base64 and pass them as structured body fields to the resolution_search handler, these are not fed into a vision/multimodal model as message content parts — the image data is handled server-side by resolutionSearch, not passed to a multimodal LLM directly. The useChat setup uses a plain JSON body, not multipart message content.

I've noted this for future reviews. I'll avoid flagging token tracking or multimodal message handling concerns for this codebase since those features are intentionally out of scope.
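
The distinction above can be sketched concretely. The field names below mirror the components mentioned but are assumptions for illustration, not the exact QCX payload shapes; only the first pattern is used in this codebase:

```typescript
// Hypothetical payloads illustrating the two patterns. In QCX, the map
// capture travels in the request body for server-side resolution_search
// handling; it never becomes part of the message content itself.
const bodyFieldRequest = {
  message: { role: 'user', content: 'What is at this location?' },
  body: {
    type: 'resolution_search',          // routed to the server-side handler
    mapboxImageData: '<base64 capture>' // consumed by resolutionSearch only
  }
}

// A true multimodal request (NOT used here) would attach the image as a
// message content part, so the LLM itself receives the pixels:
const multimodalMessage = {
  role: 'user',
  content: [
    { type: 'text', text: 'What is at this location?' },
    { type: 'image', image: '<base64 capture>' }
  ]
}
```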


✏️ Learnings added
Learnt from: ngoiyaeric
URL: https://github.com/QueueLab/QCX/pull/579

Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@ngoiyaeric
Collaborator Author

@coderabbitai # Vercel Runtime Log

Request

ID: rwcvl-1777104552049-1cd52d48d85c
Time: 2026-04-25T08:09:12.049Z
POST /api/chat/stream → 401
Host: qcx-kh8cgzmvp-qcx.vercel.app
Duration: 512ms
Cache: MISS
Region: cpt1
User Agent: Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/147.0.0.0 Mobile Safari/537.36
Referer: https://qcx-kh8cgzmvp-qcx.vercel.app/search/20edefb2-4a6b-4cad-976a-59f5cab7eb82

Lifecycle

Middleware

Status: 200
Route: middleware
Duration: 9ms
Runtime: edge
Memory: 266MB / 2048MB
Region: iad1

Function

Status: 401
Duration: 15ms
Runtime: nodejs22.x
Memory: 312MB / 2048MB
Region: iad1

Deployment

ID: dpl_EDRgZ3E2Ks5RXoET5CPcPaaJDwaw
Environment: preview
Branch: feat/ai-sdk-ui-migration

@coderabbitai
Contributor

coderabbitai Bot commented Apr 25, 2026


🧩 Analysis chain

Let me inspect the new stream route and any auth utilities it references to diagnose this 401.


🏁 Script executed:

#!/bin/bash
echo "=== app/api/chat/stream/route.ts ==="
fd --full-path --glob '**/chat/stream/route.ts' --exec cat {}

echo ""
echo "=== Searching for auth/session checks in the route ==="
rg -n "auth|session|getServerSession|currentUser|unauthorized|401|NextAuth|clerk|supabase" --type=ts -C3

echo ""
echo "=== Check middleware.ts for auth config ==="
fd "middleware.ts" --exec cat {}

Length of output: 1051


🏁 Script executed:

#!/bin/bash
echo "=== Finding all route.ts files ==="
fd "route.ts" --type f

echo ""
echo "=== Reading the chat stream route ==="
fd "route.ts" --type f | xargs grep -l "stream\|chat" | xargs cat

echo ""
echo "=== Searching for 401 / unauthorized responses in all TS/TSX files ==="
rg -n "401|unauthorized|Unauthorized" -g '*.ts' -g '*.tsx' -C3

echo ""
echo "=== Searching for auth patterns in all files ==="
rg -n "auth\(\)|getServerSession|currentUser|session" -C3

Length of output: 17912


🏁 Script executed:

#!/bin/bash
echo "=== lib/auth/get-current-user.ts ==="
fd "get-current-user" --exec cat {}

echo ""
echo "=== Auth lib directory ==="
fd . --type f --full-path | grep -i "auth"

echo ""
echo "=== Check auth config / next-auth setup ==="
fd "auth" --type f | xargs cat 2>/dev/null

Length of output: 4876


@ngoiyaeric, here's the root cause analysis of the POST /api/chat/stream → 401:


Root Cause

The route handler calls getCurrentUserIdOnServer() as its first action, and returns a 401 immediately if it gets back null. The 15ms function duration confirms the request never reaches any AI agent code — it short-circuits at the Supabase session check.

Three likely causes, in order of probability:


1. 🔴 Supabase env vars not set in the preview environment

lib/auth/get-current-user.ts explicitly returns null when env vars are missing:

if (!supabaseUrl || !supabaseAnonKey) {
  console.error('[Auth] Supabase URL or Anon Key is not set for server-side auth.');
  return { user: null, session: null, error: ... };
}

Fix: In Vercel → Project Settings → Environment Variables, ensure NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_ANON_KEY are enabled for Preview environments (not just Production), and that they cover the feat/ai-sdk-ui-migration branch.


2. 🟡 AUTH_DISABLED_FOR_DEV doesn't work on Vercel preview

The bypass flag has a hard guard:

const AUTH_DISABLED_FLAG =
  process.env.AUTH_DISABLED_FOR_DEV === 'true' &&
  process.env.NODE_ENV !== 'production';   // ← Vercel sets NODE_ENV=production on ALL deployments, including preview

So even if AUTH_DISABLED_FOR_DEV=true is set in your Vercel preview env, this flag will always evaluate to false there. The mock user path is unreachable on any Vercel deployment.

Fix (if you want dev bypass on preview): Change the guard to use a separate flag, e.g.:

const AUTH_DISABLED_FLAG =
  process.env.AUTH_DISABLED_FOR_DEV === 'true' &&
  process.env.VERCEL_ENV !== 'production'; // 'preview' | 'development' | 'production'

3. 🟡 No active Supabase session for this user on the preview URL

The request came from an Android mobile browser to qcx-kh8cgzmvp-qcx.vercel.app — a different origin from your main deployment. If the user was signed in on a different URL (e.g., qcx.vercel.app), their Supabase session cookie won't be present on the preview subdomain.

Fix: Sign into the Supabase-backed auth flow on the preview URL before hitting /api/chat/stream, or verify that the Supabase project's "Site URL" and "Redirect URLs" in the Supabase dashboard include the preview URL.


Recommended immediate check

In your Vercel function logs for that deployment, look for:

  • [Auth] Supabase URL or Anon Key is not set → env vars missing (cause 1)
  • Silence (no log) → env vars present but session is null (cause 3)

🧠 Learnings used
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.

Stream protocol: add x-vercel-ai-data-stream header, use 2: data frames
instead of 8: annotation frames, add d: finish-message after every e:
finish-step, add Content-Type on error responses.

Security: stop logging API key prefixes in geospatial.tsx, add default
case to queryType switch.

Reliability: bound researcher retry loop (max 3 attempts), return
newSegments instead of mutating caller messages, add AbortController
timeout and response.ok check in retrieve.tsx, wrap JSON.parse in
try/catch in search-section/video-search-section/tool-result-renderer.

Components: memoize ChatProvider context value, type-safe annotation
filtering with type guard, fix imports (ChatRequestOptions from ai not
ai/react), stable keys in chat-messages, fix isLastMessage tracking,
FileReader onerror/onabort handlers in resolution-carousel and
header-search-button, provider-aware disabled check, router.push for
new chat navigation, skip sentinel via body metadata in copilot,
loading guard and error handling in followup-panel, mapProvider in
search-related body, remove isCollapsed from collapsible-message deps,
router.refresh only on loading transition in chat.tsx.

Data: strip base64 images before DB persistence in saveChatAsync,
preserve message IDs for stable deduplication, filter to valid roles
in initialMessages rehydration, use generateObject instead of
streamObject in suggest.ts and inquire.tsx, send file attachments
as base64 in chat-panel, delete unused app/actions.tsx.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
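
The stream-protocol changes in the commit message above can be sketched as follows. This assumes the AI SDK v4 data stream framing (each frame is `<code>:<json>` terminated by a newline); `formatDataStreamPart` is a hypothetical helper here, not the SDK's own export:

```typescript
// Frame codes assumed from the AI SDK v4 data stream protocol:
// 0 = text delta, 2 = data frame, 8 = message annotation,
// e = finish-step, d = finish-message.
type FrameCode = '0' | '2' | '8' | 'e' | 'd'

function formatDataStreamPart(code: FrameCode, value: unknown): string {
  // Each frame is `<code>:<json>\n`
  return `${code}:${JSON.stringify(value)}\n`
}

// Headers the useChat client expects on the response (per the commit message):
const streamHeaders = {
  'Content-Type': 'text/plain; charset=utf-8',
  'x-vercel-ai-data-stream': 'v1'
}

// One assistant turn: 2: data frames instead of 8: annotations, and a
// d: finish-message emitted after the e: finish-step.
const frames = [
  formatDataStreamPart('0', 'Hello'),                          // text delta
  formatDataStreamPart('2', [{ type: 'related', items: [] }]), // data frame
  formatDataStreamPart('e', { finishReason: 'stop' }),         // finish-step
  formatDataStreamPart('d', { finishReason: 'stop' })          // finish-message
]
```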
@ngoiyaeric
Collaborator Author

@coderabbitai no tokens are being generated

@coderabbitai
Contributor

coderabbitai Bot commented Apr 25, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== app/api/chat/stream/route.ts ==="
fd "route.ts" --type f | xargs grep -l "stream\|chat" | xargs cat

echo ""
echo "=== lib/agents/writer.tsx ==="
fd "writer.tsx" --exec cat {}

echo ""
echo "=== lib/agents/researcher.tsx ==="
fd "researcher.tsx" --exec cat {}

echo ""
echo "=== lib/agents/task-manager.tsx ==="
fd "task-manager.tsx" --exec cat {}

echo ""
echo "=== components/chat-provider.tsx ==="
fd "chat-provider.tsx" --exec cat {}

echo ""
echo "=== Checking for toDataStreamResponse / createDataStream / streamText usage ==="
rg -n "toDataStreamResponse|createDataStream|pipeDataStreamToResponse|DataStreamWriter|writeData|streamText" -g '*.ts' -g '*.tsx' -C2

Length of output: 33137


@ngoiyaeric, here are the two most likely root causes for no tokens being generated, based on the route code:


🔴 Root Cause 1 — Model API key not set in the preview environment

getModel() is called in both researcher and writer. If the relevant API key (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY) is missing in the Vercel Preview environment, streamText will throw on the first researcher attempt. That sets hasError = true, the while loop exits immediately, answer stays '', and the if (answer) guard means no 0: text frame is ever sent to the client.

Fix: In Vercel → Project Settings → Environment Variables, make sure all model API keys are scoped to Preview (not just Production).


🟡 Root Cause 2 — Writer fallback is gated behind USE_SPECIFIC_API_FOR_WRITER=true

In the proceed path (app/api/chat/stream/route.ts):

if (useSpecificAPI && answer.length === 0) {
  answer = await writer(currentSystemPrompt, latestMessages)
}

useSpecificAPI is process.env.USE_SPECIFIC_API_FOR_WRITER === 'true'. If that env var is not set (which is the default), the writer never runs, even when the researcher produced only tool calls and no text. Three researcher attempts could all return fullResponse = '' (LLM chose tool calls only), and no text response would ever be generated.

Fix: Either set USE_SPECIFIC_API_FOR_WRITER=true in your Vercel preview env, or change the writer fallback to always run when answer is empty:

// Always fall back to writer if researcher produced no text
if (answer.length === 0) {
  const latestMessages = messages.slice(maxMsgs * -1)
  answer = await writer(currentSystemPrompt, latestMessages)
}

Recommended immediate check

Add a temporary log at the top of the POST handler (after auth passes) to confirm which path fails:

console.log('[stream] model keys present:', {
  openai: !!process.env.OPENAI_API_KEY,
  anthropic: !!process.env.ANTHROPIC_API_KEY,
})

If both are false in the Vercel function logs → Cause 1. If both are true but answer is still empty → Cause 2.


🧠 Learnings used
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.

Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The user confirmed that app/auth/page.tsx, lib/auth/v0/auth-service.ts, lib/supabase/browser-client.ts, and middleware.ts all exist and are fully implemented with Supabase SSR authentication. These files are production-ready and should not be replaced or considered legacy/stubs.

Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.

Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The app/api/chats/route.ts endpoint correctly returns { chats: [] } for unauthenticated users instead of a 401 error. This is the intended behavior for better UX and should not be changed to return 401.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/agents/tools/geospatial.tsx (1)

270-300: ⚠️ Potential issue | 🟡 Minor

Residual 'unknown_tool' fallback when prefer() returns undefined.

The new default branch handles unsupported queryType, but for 'directions', 'map', 'reverse', and 'geocode' the IIFE still returns prefer(...) with no string fallback (only 'search' has || 'poi_search_tool'). If the MCP server doesn't expose directions_tool/static_map_image_tool/reverse_geocode_tool/forward_geocode_tool under those exact names, toolName is undefined and Line 300 sends name: 'unknown_tool' — producing the same confusing "tool not found" error the prior review aimed to eliminate. Prefer failing fast with a structured message and dropping the ?? 'unknown_tool' fallback.

🛡️ Suggested fix
       const toolName = await (async () => {
         const { tools } = await mcpClient.listTools().catch(() => ({ tools: [] }));
         const names = new Set(tools?.map((t: any) => t.name) || []);
         const prefer = (...cands: string[]) => cands.find(n => names.has(n));

+        let resolved: string | undefined
         switch (queryType) {
-          case 'directions': return prefer('directions_tool')
-          case 'distance': return prefer('matrix_tool');
-          case 'search': return prefer( 'isochrone_tool','category_search_tool') || 'poi_search_tool';
-          case 'map': return prefer('static_map_image_tool')
-          case 'reverse': return prefer('reverse_geocode_tool');
-          case 'geocode': return prefer('forward_geocode_tool');
+          case 'directions': resolved = prefer('directions_tool'); break;
+          case 'distance':   resolved = prefer('matrix_tool'); break;
+          case 'search':     resolved = prefer('isochrone_tool', 'category_search_tool', 'poi_search_tool'); break;
+          case 'map':        resolved = prefer('static_map_image_tool'); break;
+          case 'reverse':    resolved = prefer('reverse_geocode_tool'); break;
+          case 'geocode':    resolved = prefer('forward_geocode_tool'); break;
           default: throw new Error(`Unsupported queryType: ${queryType}`);
         }
+        if (!resolved) {
+          throw new Error(`No MCP tool available on server for queryType "${queryType}"`);
+        }
+        return resolved
       })();
-            mcpClient.callTool({ name: toolName ?? 'unknown_tool', arguments: toolArgs }),
+            mcpClient.callTool({ name: toolName, arguments: toolArgs }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/tools/geospatial.tsx` around lines 270 - 300, The code currently
falls back to name: 'unknown_tool' when toolName is undefined, masking missing
MCP tool mappings; update the toolName resolution (the IIFE that calls
prefer(...)) to detect when prefer(...) returns undefined and immediately throw
a clear, structured Error (including queryType and the attempted tool
identifiers) instead of returning undefined, and remove the "?? 'unknown_tool'"
fallback from the mcpClient.callTool call so mcpClient.callTool({ name:
toolName, arguments: toolArgs }) is only invoked with a validated toolName;
reference prefer(), toolName, and mcpClient.callTool to locate the changes.
♻️ Duplicate comments (3)
components/search-related.tsx (1)

20-29: ⚠️ Potential issue | 🟡 Minor

Related-query body still missing drawnFeatures (and other geospatial context).

Only mapProvider was added; the route handler also reads drawnFeatures, timezone, and latitude/longitude from the request body (app/api/chat/stream/route.ts:34-49). Related-query follow-ups will run without the user-drawn map context that typed messages from ChatPanel.handleSubmit carry, producing degraded geospatial routing.

♻️ Proposed fix
-import { useSettingsStore } from '@/lib/store/settings'
+import { useSettingsStore } from '@/lib/store/settings'
+import { useMapData } from '@/components/map/map-data-context'
@@
   const { append } = useChatContext()
   const { mapProvider } = useSettingsStore()
+  const { mapData } = useMapData()
@@
   const handleRelatedClick = async (query: string) => {
     await append(
       { role: 'user', content: query },
       {
         body: {
           mapProvider,
+          drawnFeatures: mapData.drawnFeatures || [],
+          timezone: mapData.currentTimezone || 'UTC',
         }
       }
     )
   }

Alternative: centralize the default body in ChatProvider so every append() inherits the same geospatial context.
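
That alternative could look like the sketch below, where `buildChatBody` and the `GeoContext` shape are hypothetical names for the shared context ChatProvider would own:

```typescript
// Hypothetical shared geospatial context owned by ChatProvider.
interface GeoContext {
  mapProvider: string
  drawnFeatures: unknown[]
  timezone: string
}

// Merge caller-specific fields over the shared defaults so every append()
// inherits mapProvider/drawnFeatures/timezone without each component
// (search-related, followup-panel, chat-panel) repeating them.
function buildChatBody(
  ctx: GeoContext,
  extra: Record<string, unknown> = {}
): Record<string, unknown> {
  return {
    mapProvider: ctx.mapProvider,
    drawnFeatures: ctx.drawnFeatures ?? [],
    timezone: ctx.timezone || 'UTC',
    ...extra // caller-specific fields win on conflict
  }
}
```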

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/search-related.tsx` around lines 20 - 29, The related-query
handler handleRelatedClick currently only sends mapProvider in the append()
body; include the same geospatial context fields that the route expects
(drawnFeatures, timezone, latitude, longitude) so follow-ups carry user-drawn
map context—either add those properties to the body passed in handleRelatedClick
or centralize a default request body in ChatProvider so all append() calls
(including ChatPanel.handleSubmit and handleRelatedClick) inherit
drawnFeatures/timezone/latitude/longitude along with mapProvider.
components/header-search-button.tsx (1)

128-130: ⚠️ Potential issue | 🟡 Minor

Duplicate base64 conversion still present.

When a single provider is active, fileData reads the chosen blob, and then mapboxImageData/googleImageData reads the same blob again — doubling the FileReader work and roughly doubling the in-memory base64 strings before serialization. Reuse the already-computed value:

♻️ Proposed fix
-      const fileData = await blobToBase64((mapboxBlob || googleBlob)!)
-      const mapboxImageData = mapboxBlob ? await blobToBase64(mapboxBlob) : undefined
-      const googleImageData = googleBlob ? await blobToBase64(googleBlob) : undefined
+      const mapboxImageData = mapboxBlob ? await blobToBase64(mapboxBlob) : undefined
+      const googleImageData = googleBlob ? await blobToBase64(googleBlob) : undefined
+      const fileData = (mapboxImageData ?? googleImageData)!
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/header-search-button.tsx` around lines 128 - 130, The code
converts the same blob to base64 multiple times; change the order to compute
mapboxImageData = mapboxBlob ? await blobToBase64(mapboxBlob) : undefined and
googleImageData = googleBlob ? await blobToBase64(googleBlob) : undefined once,
then set fileData = mapboxImageData ?? googleImageData (or otherwise reuse the
already-computed base64) so that blobToBase64 is not called twice for the same
blob; update references to fileData, mapboxImageData, googleImageData and keep
using blobToBase64, mapboxBlob and googleBlob identifiers to locate the change.
components/followup-panel.tsx (1)

17-37: ⚠️ Potential issue | 🟡 Minor

Input is cleared before the awaited append, so failures silently drop the user's text.

setInput('') runs on Line 22 before await append(...). If append rejects (network error, 5xx, aborted stream), the only feedback is a console.error and the input field is already empty — the user must retype. Restore the input on failure (or clear only after success), and consider surfacing the error in the UI.

♻️ Suggested fix
   const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
     event.preventDefault()
     if (!input.trim() || isLoading) return

     const content = input.trim()
-    setInput('')
-
     try {
       await append(
         { role: 'user', content },
         {
           body: {
             mapProvider,
             drawnFeatures: mapData.drawnFeatures || [],
           }
         }
       )
+      setInput('')
     } catch (error) {
       console.error('Failed to send follow-up:', error)
+      // keep `input` populated so the user can retry
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/followup-panel.tsx` around lines 17 - 37, The handler clears the
input before awaiting append, causing user text to be lost on failure; in
handleSubmit, keep the trimmed content in a local variable (content) and either
move setInput('') to after the awaited append succeeds or restore the input in
the catch by calling setInput(content); also replace the console.error with a
UI-facing error state (e.g., setError) or show a toast so users see
failures—update references to handleSubmit, append, setInput, and
mapData.drawnFeatures accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/api/chat/stream/route.ts`:
- Around line 320-374: The synthetic rows pushed into aiMessages (see the
toolOutputs loop, the block adding answer, relatedQueries, and the final 'end'
marker) use fresh nanoid() each save, causing duplicate rows; change those id
generations to deterministic IDs derived from the chat context (e.g., use chatId
+ turnIndex + role/type like `${chatId}:${turnIndex}:response`) or only persist
the current turn's delta instead of re-writing all messages so
onConflictDoUpdate(id) will match existing rows; update the id assignment for
the tool output block (tool.toolName), the assistant answer block, the
related-queries block, and the end marker to use that deterministic scheme.
- Around line 75-98: The hardcoded reply branch using lastText exact matches
(variable lastText) is fragile and bypasses taskManager/RAG/tool persistence
while saving only the raw definition via saveChatAsync; fix by normalizing the
input (e.g., strip punctuation and whitespace and lowercase lastText) or better
yet move these canned definitions into the system prompt/source-of-truth, and
ensure the branch funnels responses through the same pipeline as other chats:
invoke taskManager or the normal response composer so RAG augmentation and tool
outputs are applied, and when persisting use the same save path (e.g.,
saveChatAsync) but include the full response metadata (related queries/tool
outputs) rather than only the raw definition so rehydrated history contains
relatedQueries and usage fields; locate changes around lastMsg/lastText
handling, the conditional block that constructs definition and the call to
saveChatAsync to implement this.
- Around line 217-298: The stream's start callback in handleResolutionSearch can
throw without reaching the outer catch and leaves useChat stuck; wrap the entire
start(controller) body in a try/catch and on error call controller.error(err) or
enqueue synthetic finish frames (e: and d: with finishReason:"error") before
closing so the client receives a terminal event; also stop doing fire-and-forget
saveChatAsync(...) — schedule it via after(() => saveChatAsync(chatId, userId,
messages, analysisResult.summary || '')) (or the equivalent after wrapper used
elsewhere) so the save runs safely in the serverless background.
- Around line 53-64: The current in-place splice and aggressive filter/map
remove tool turns and mutate messages used later for title heuristics: instead
of mutating messages with messages.splice(...) create a trimmed copy (e.g.,
trimmedMessages = messages.slice(-maxMsgs)) so the original array (and original
first user message) remains intact; when building messages preserve tool turns
and toolInvocations by including role === 'tool' and copying m.toolInvocations
in the map that produces CoreMessage (rather than dropping those fields);
finally pass the preserved originalFirstUserMessage (capture messages[0] before
trimming) into saveChatAsync for title computation so the title heuristic uses
the original user prompt.
- Around line 168-176: relatedQueries is left as {} when errorOccurred and then
emitted unconditionally, wasting bandwidth; change logic so relatedQueries is
only set and streamed when querySuggestor returned non-empty results.
Specifically: keep using the sanitizedMessages -> querySuggestor(...) call but
set relatedQueries to undefined/null by default, and when emitting the
annotation that currently sends { type: 'related', relatedQueries } only do so
if !errorOccurred && relatedQueries?.items?.length > 0; update the emission code
that writes the { type: 'related', relatedQueries } annotation to be conditional
on that check.

In `@components/chat-messages.tsx`:
- Line 18: The variable isLoading is currently unused after destructuring from
useChatContext(); either remove it from the destructuring to eliminate the
unused variable, or thread it into the message rendering so loading state can be
shown — e.g., pass isLoading as a prop into CollapsibleMessage (or the assistant
skeleton component used to render assistant messages) and use it to show the
streaming/loading indicator there; update the useChatContext() destructure in
components/chat-messages.tsx and the CollapsibleMessage/assistant component
props and handling accordingly.
- Around line 31-40: Tool result annotations are being rendered globally from
annotations (useChat().data) causing old tool outputs to accumulate; either have
the backend include a per-message identifier (e.g., messageId or turnIndex on
tool_result) and then filter annotations by that id before pushing to
renderedMessages, or stop rendering all tool_result entries up-front and instead
render tool outputs inside the per-message loop (use message.toolInvocations or
the current message object) so you only render tool results that belong to the
current assistant message; update the rendering logic around the annotations
variable and the ToolResultRenderer invocations (and adjust where
renderedMessages is populated) to use the per-message identifier or
message.toolInvocations to scope tool results.
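
The per-message-identifier option could look like this (a sketch under the assumption that the backend attaches a `messageId` field to each `tool_result` annotation — that field does not exist yet):

```typescript
type ToolResultAnnotation = {
  type: 'tool_result'
  messageId?: string // per-message identifier the backend would need to attach
  data: unknown
}

// Scope tool_result annotations to the assistant message they belong to,
// instead of rendering the whole accumulated annotation stream on every turn.
function toolResultsForMessage(
  annotations: ToolResultAnnotation[],
  messageId: string
): ToolResultAnnotation[] {
  return annotations.filter(a => a.type === 'tool_result' && a.messageId === messageId)
}
```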

In `@components/chat-panel.tsx`:
- Around line 99-128: The FileReader promise in handleSubmit (creation of
FileReader, reader.onloadend/reader.onerror and awaiting new Promise) must be
wrapped in try/catch so reader.onerror rejections are handled; on error,
log/report a user-visible error (e.g., set an error state or show a toast),
avoid calling setInput('')/clearAttachment() or restore them if already cleared,
and return early so append(...) is not called; update the block around
FileReader/reader.onerror and the subsequent setInput/clearAttachment calls to
only run after a successful read and ensure append(...) is only invoked when
fileData is available or when no file, keeping existing append(...) usage
intact.
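
A minimal sketch of the guarded flow, with the component's handlers injected so the ordering is explicit (`readAttachment`, `append`, `clearAttachment`, and `onError` are stand-ins for the real FileReader promise and state setters):

```typescript
// On read failure: report the error, leave input/attachment untouched, and
// never call append. On success: append first, then clear state.
async function submitWithAttachment<A>(opts: {
  attachment: A | null
  readAttachment: (a: A) => Promise<string>
  append: (body: { fileData?: string }) => Promise<void>
  clearAttachment: () => void
  onError: (message: string) => void
}): Promise<boolean> {
  let fileData: string | undefined
  if (opts.attachment) {
    try {
      fileData = await opts.readAttachment(opts.attachment)
    } catch {
      // Keep input and attachment intact so the user can retry.
      opts.onError('Could not read the attached file.')
      return false
    }
  }
  await opts.append(fileData ? { fileData } : {})
  // Only clear state after a successful read + append.
  opts.clearAttachment()
  return true
}
```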

In `@components/chat-provider.tsx`:
- Line 22: Update the handleSubmit signature in the chat provider interface to
use the proper ChatRequestOptions type instead of any: change handleSubmit: (e?:
React.FormEvent<HTMLFormElement>, options?: any) => void to accept options?:
ChatRequestOptions; ensure you import or reference ChatRequestOptions from the
SDK/module where other request types come from (matching existing imports),
update any related usages/calls to satisfy the stronger type, and run typechecks
to fix any mismatches.
- Around line 47-67: The ChatProvider currently ignores useChat's error and
control callbacks so 401s and stream failures are swallowed; update ChatProvider
to extract error, reload and stop (and optionally onError/onResponse handlers)
from useChat and include them in the memoized context value and ChatContextValue
type so consumers (e.g., ChatMessages/ChatPanel) can show a toast/banner on
error and provide retry/stop controls; ensure the memoized value returned by
ChatProvider contains error, reload, and stop alongside
messages/input/append/handleSubmit/isLoading/data.

In `@components/video-search-section.tsx`:
- Line 28: The render currently accesses searchResults.searchParameters.q
directly which can throw if searchParameters is missing; update the usage inside
ToolBadge (component: ToolBadge) to use optional chaining and a safe fallback
(e.g., searchResults?.searchParameters?.q ?? '' or a placeholder like '—') so
rendering remains robust against partial/malformed payloads while keeping the
rest of the JSX unchanged.

In `@lib/agents/researcher.tsx`:
- Around line 134-145: newSegments can include tool-calls without matching
tool-results when a stream errors after a 'tool-call' but before its
'tool-result', and also can create an assistant message with empty text when
fullResponse is empty; fix by filtering toolCalls to only include those that
have a corresponding toolResult (match by the ToolCallPart/ToolResultPart
identifier — verify and use toolCallId if present, otherwise toolName) before
building newSegments, and skip adding an assistant text part when fullResponse
is empty; update the logic around the variables newSegments, toolCalls,
toolResponses, ToolCallPart, ToolResultPart, fullResponse, and the errorOccurred
handling so only paired tool-call/tool-result entries are appended to message
history.
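
The pairing logic could be factored as follows (a sketch; part shapes are simplified, and matching assumes `toolCallId` is present, which the comment suggests verifying first):

```typescript
type ToolCallPart = { type: 'tool-call'; toolCallId: string; toolName: string; args: unknown }
type ToolResultPart = { type: 'tool-result'; toolCallId: string; toolName: string; result: unknown }
type Segment =
  | { role: 'assistant'; content: string | ToolCallPart[] }
  | { role: 'tool'; content: ToolResultPart[] }

// Only append tool-calls that have a matching tool-result (paired by
// toolCallId), and skip the assistant text part when it would be empty.
function buildSegments(
  toolCalls: ToolCallPart[],
  toolResponses: ToolResultPart[],
  fullResponse: string
): Segment[] {
  const answered = new Set(toolResponses.map(r => r.toolCallId))
  const pairedCalls = toolCalls.filter(c => answered.has(c.toolCallId))
  const segments: Segment[] = []
  if (pairedCalls.length > 0) segments.push({ role: 'assistant', content: pairedCalls })
  if (toolResponses.length > 0) segments.push({ role: 'tool', content: toolResponses })
  if (fullResponse.trim().length > 0) segments.push({ role: 'assistant', content: fullResponse })
  return segments
}
```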

In `@lib/agents/writer.tsx`:
- Around line 22-35: Replace the nonexperimental_streamText usage and manual
for-await reassembly with generateText: call generateText({ model: await
getModel() as LanguageModel, maxTokens: 2500, system: systemToUse, messages }),
await its result, and return the full text directly instead of looping over
result.textStream and concatenating into fullResponse; remove the fullResponse
variable and the for-await loop around nonexperimental_streamText.

---

Outside diff comments:
In `@lib/agents/tools/geospatial.tsx`:
- Around line 270-300: The code currently falls back to name: 'unknown_tool'
when toolName is undefined, masking missing MCP tool mappings; update the
toolName resolution (the IIFE that calls prefer(...)) to detect when prefer(...)
returns undefined and immediately throw a clear, structured Error (including
queryType and the attempted tool identifiers) instead of returning undefined,
and remove the "?? 'unknown_tool'" fallback from the mcpClient.callTool call so
mcpClient.callTool({ name: toolName, arguments: toolArgs }) is only invoked with
a validated toolName; reference prefer(), toolName, and mcpClient.callTool to
locate the changes.
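
A sketch of the validated resolution (identifiers other than `prefer` are illustrative):

```typescript
// Throw a structured error instead of masking a missing MCP tool mapping
// behind an 'unknown_tool' fallback.
function resolveToolName(
  prefer: (...candidates: string[]) => string | undefined,
  queryType: string,
  candidates: string[]
): string {
  const name = prefer(...candidates)
  if (name === undefined) {
    throw new Error(
      `No MCP tool mapped for queryType "${queryType}" (tried: ${candidates.join(', ')})`
    )
  }
  return name
}
```

`mcpClient.callTool({ name: toolName, arguments: toolArgs })` is then only ever invoked with a validated name.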

---

Duplicate comments:
In `@components/followup-panel.tsx`:
- Around line 17-37: The handler clears the input before awaiting append,
causing user text to be lost on failure; in handleSubmit, keep the trimmed
content in a local variable (content) and either move setInput('') to after the
awaited append succeeds or restore the input in the catch by calling
setInput(content); also replace the console.error with a UI-facing error state
(e.g., setError) or show a toast so users see failures—update references to
handleSubmit, append, setInput, and mapData.drawnFeatures accordingly.
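
The clear-after-success / restore-on-failure pattern, sketched with stand-in handlers:

```typescript
// setInput/setError/append stand in for the component's real handlers.
async function submitFollowup(opts: {
  input: string
  append: (content: string) => Promise<void>
  setInput: (v: string) => void
  setError: (v: string | null) => void
}): Promise<void> {
  const content = opts.input.trim()
  if (!content) return
  try {
    await opts.append(content)
    opts.setInput('') // clear only once the append has succeeded
  } catch {
    opts.setInput(content) // restore so the user's text is not lost
    opts.setError('Failed to send message. Please try again.')
  }
}
```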

In `@components/header-search-button.tsx`:
- Around line 128-130: The code converts the same blob to base64 multiple times;
change the order to compute mapboxImageData = mapboxBlob ? await
blobToBase64(mapboxBlob) : undefined and googleImageData = googleBlob ? await
blobToBase64(googleBlob) : undefined once, then set fileData = mapboxImageData
?? googleImageData (or otherwise reuse the already-computed base64) so that
blobToBase64 is not called twice for the same blob; update references to
fileData, mapboxImageData, googleImageData and keep using blobToBase64,
mapboxBlob and googleBlob identifiers to locate the change.
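
The encode-once ordering, sketched generically (`encode` stands in for blobToBase64):

```typescript
// Encode each blob at most once and reuse the results for fileData,
// rather than converting the same blob to base64 twice.
async function buildImageData<B>(
  encode: (b: B) => Promise<string>,
  mapboxBlob?: B,
  googleBlob?: B
): Promise<{ mapboxImageData?: string; googleImageData?: string; fileData?: string }> {
  const mapboxImageData = mapboxBlob ? await encode(mapboxBlob) : undefined
  const googleImageData = googleBlob ? await encode(googleBlob) : undefined
  return { mapboxImageData, googleImageData, fileData: mapboxImageData ?? googleImageData }
}
```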

In `@components/search-related.tsx`:
- Around line 20-29: The related-query handler handleRelatedClick currently only
sends mapProvider in the append() body; include the same geospatial context
fields that the route expects (drawnFeatures, timezone, latitude, longitude) so
follow-ups carry user-drawn map context—either add those properties to the body
passed in handleRelatedClick or centralize a default request body in
ChatProvider so all append() calls (including ChatPanel.handleSubmit and
handleRelatedClick) inherit drawnFeatures/timezone/latitude/longitude along with
mapProvider.
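
The centralized-default-body option could be a small helper in ChatProvider (a sketch; the field names follow the route's expectations as listed above):

```typescript
type GeoContext = {
  mapProvider?: string
  drawnFeatures?: unknown[]
  timezone?: string
  latitude?: number
  longitude?: number
}

// One place builds the request body, so every append() call — ChatPanel,
// followup panel, related-query clicks — carries the same geospatial context.
function buildRequestBody(context: GeoContext, extra?: Record<string, unknown>) {
  return {
    mapProvider: context.mapProvider,
    drawnFeatures: context.drawnFeatures ?? [],
    timezone: context.timezone,
    latitude: context.latitude,
    longitude: context.longitude,
    ...extra
  }
}
```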
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: cb5fc12d-93ec-4e6d-b2e3-a017d730650a

📥 Commits

Reviewing files that changed from the base of the PR and between 40f56ad and e3ebcbf.

📒 Files selected for processing (23)
  • app/actions.tsx
  • app/api/chat/stream/route.ts
  • app/search/[id]/page.tsx
  • components/chat-messages.tsx
  • components/chat-panel.tsx
  • components/chat-provider.tsx
  • components/chat.tsx
  • components/collapsible-message.tsx
  • components/copilot.tsx
  • components/followup-panel.tsx
  • components/header-search-button.tsx
  • components/mobile-icons-bar.tsx
  • components/resolution-carousel.tsx
  • components/search-related.tsx
  • components/search-section.tsx
  • components/tool-result-renderer.tsx
  • components/video-search-section.tsx
  • lib/actions/suggest.ts
  • lib/agents/inquire.tsx
  • lib/agents/researcher.tsx
  • lib/agents/tools/geospatial.tsx
  • lib/agents/tools/retrieve.tsx
  • lib/agents/writer.tsx
💤 Files with no reviewable changes (1)
  • app/actions.tsx
📜 Review details
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
📚 Learning: 2026-02-08T16:50:33.791Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.

Applied to files:

  • lib/agents/inquire.tsx
  • lib/agents/writer.tsx
  • components/search-related.tsx
  • components/tool-result-renderer.tsx
  • components/followup-panel.tsx
  • lib/actions/suggest.ts
  • components/copilot.tsx
  • app/search/[id]/page.tsx
  • lib/agents/researcher.tsx
  • components/chat-messages.tsx
  • components/chat-panel.tsx
  • app/api/chat/stream/route.ts
📚 Learning: 2026-04-25T08:10:37.659Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.

Applied to files:

  • components/resolution-carousel.tsx
  • components/search-related.tsx
  • components/followup-panel.tsx
  • components/mobile-icons-bar.tsx
  • lib/actions/suggest.ts
  • components/chat-provider.tsx
  • components/header-search-button.tsx
  • components/copilot.tsx
  • app/search/[id]/page.tsx
  • components/chat-messages.tsx
  • components/chat-panel.tsx
  • lib/agents/tools/geospatial.tsx
  • app/api/chat/stream/route.ts
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.

Applied to files:

  • components/tool-result-renderer.tsx
  • lib/actions/suggest.ts
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.

Applied to files:

  • lib/actions/suggest.ts
📚 Learning: 2026-01-16T10:02:40.220Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.

Applied to files:

  • components/chat-messages.tsx
📚 Learning: 2026-02-10T08:45:03.719Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The app/api/chats/route.ts endpoint correctly returns { chats: [] } for unauthenticated users instead of a 401 error. This is the intended behavior for better UX and should not be changed to return 401.

Applied to files:

  • components/chat-panel.tsx
  • app/api/chat/stream/route.ts
🔇 Additional comments (18)
lib/agents/inquire.tsx (1)

1-30: LGTM — generateObject migration is clean.

The switch to generateObject matches the prior recommendation and the call site in app/api/chat/stream/route.ts:111-128 correctly consumes the returned object. Schema-based enforcement makes the embedded JSON shape in the system prompt informational rather than load-bearing — fine to keep for clarity.

lib/actions/suggest.ts (1)

1-34: LGTM — generateObject conversion looks correct.

getSuggestions cleanly returns the validated object and the call site in components/chat-panel.tsx:145-150 matches the new Promise<PartialRelated> signature. The X.AI structured-output compatibility concern previously raised still applies to getModel() selection but is not a new issue.

components/mobile-icons-bar.tsx (1)

31-47: LGTM — clean migration to chat context.

setMessages([]) followed by router.push('/') correctly replaces the removed useActions().clearChat() flow and matches the pattern used elsewhere in the PR.

components/resolution-carousel.tsx (1)

40-56: LGTM — FileReader error handling addressed.

onerror/onabort rejecters resolve the prior hang-on-failure concern; the append() body shape matches the server route's resolution_search handler in app/api/chat/stream/route.ts:67-71.

components/chat.tsx (1)

70-80: Refresh-trigger transition guard correctly addresses prior feedback.

prevIsLoadingRef is initialized to false, so on initial mount of /search/[id] with persisted assistant-ended history (isLoading === false) the effect short-circuits, and router.refresh() only fires on a true→false transition. Behavior matches the previously requested guard.

components/collapsible-message.tsx (1)

28-37: Sync effect simplification matches prior feedback.

isCollapsed is now derived synchronously and the effect depends only on isLastMessage, so user-initiated toggles are no longer overridden when isCollapsed flips. The early-return for the non-collapsed case (Line 35) preserves the original render path.

components/copilot.tsx (1)

61-77: Skip flow now uses body metadata as previously recommended.

The {"action":"skip"} sentinel no longer appears in user-visible message content; instead body: { action: 'skip' } is sent as metadata and the message content is the user's actual query (or a readable 'Skipped' fallback). This matches the existing body-based pattern used by followup-panel.tsx and avoids leaking the sentinel into UI/persisted history.

components/search-section.tsx (1)

14-54: Synchronous parse with safe fallback looks correct.

The try/catch around JSON.parse, the 'error' in parsed narrowing, and the skeleton fallback when parsed is undefined cleanly replace the prior streaming control flow. The duplicate error branch (also handled upstream in ToolResultRenderer) is reasonable defense-in-depth.

lib/agents/tools/retrieve.tsx (1)

4-48: Timeout + structured error return aligns with the new client-side rendering contract.

The 15s AbortController, response-OK guard, and { error: ... } payloads on every failure path produce shapes that ToolResultRenderer's result.error short-circuit and RetrieveSection's data.results consumer both handle correctly. clearTimeout in finally covers the success, non-OK, and throw paths.

components/tool-result-renderer.tsx (1)

14-50: Error short-circuit addresses prior crash risk.

result.error is now handled before any switch dispatch, so SearchSection/VideoSearchSection/RetrieveSection no longer receive { error: string } payloads they can't render. The geospatial branch uses optional chaining for mcp_response/location and gates on result.type === 'MAP_QUERY_TRIGGER', which is consistent with the new geospatial tool's success shape.

components/chat-panel.tsx (1)

19-31: messages prop is redundant with useChatContext().

ChatPanel reads messages via context downstream and the same context already exposes messages. The prop creates two sources of truth. Drop the prop and read messages from useChatContext() directly.

app/search/[id]/page.tsx (1)

41-49: Rehydration drops annotations/tool invocations — past chats lose tool results, inquiry, related queries.

The DB schema (lib/db/schema.ts:52-61) and the Message[] constructed here only carry id/role/content/createdAt. Tool outputs and the inquiry/related annotations emitted by /api/chat/stream (route.ts lines 119, 190, 200, 269) live solely on the in-memory data/annotations stream; reopening a chat via /search/[id] will never re-populate useChatContext().annotations, so ToolResultRenderer, Copilot, and SearchRelated will be empty for historical sessions. Persist annotations (e.g., a jsonb column) and rehydrate them, otherwise this regresses the prior RSC behavior where rendered tool UI was reconstructed from AIState.

Also note: 'system' is included in validRoles, but components/chat-messages.tsx only renders user and assistant — system rows will be silently dropped on display while still being sent back to the route as part of the messages payload. Either filter to user/assistant here, or render system explicitly.

components/chat-messages.tsx (1)

32-32: Duplicates getToolResults provider logic.

useChatContext().getToolResults already filters tool_result annotations. Re-filtering here splits the abstraction. Consume the provider helper (or drop the helper from the provider) so there is one place that knows the annotation shape.

app/api/chat/stream/route.ts (4)

49-49: RAG augmentation still missing — same bug carried over from app/actions.tsx.

currentSystemPrompt is passed straight to researcher at line 146 with no call to generate_embedding / hybrid_search from lib/actions/rag.ts. The retrieved-context infrastructure stays dormant in the new endpoint, exactly mirroring the previously-reported defect on app/actions.tsx:433 (where currentSystemPrompt was sent instead of augmentedSystemPrompt). If RAG is intended for production after the migration, augment the prompt here before invoking researcher.

Based on learnings: "there's a bug on line 433 of app/actions.tsx where currentSystemPrompt is passed to the researcher instead of augmentedSystemPrompt, preventing the retrieved context from being used." The same pattern is repeated here.


93-93: Fire-and-forget saveChatAsync will be killed mid-write on Vercel serverless.

All three branches dispatch saveChatAsync(...) without await immediately before returning the streaming Response. On Vercel's Node serverless runtime the function instance can be frozen as soon as the response body is consumed by the platform, dropping the in-flight DB write and producing intermittent missing chat persistence — particularly for fast streams (the hardcoded path) and the resolution-search path where the response is small. Wrap with Next.js after() so the runtime keeps the function alive until the save completes.

-import { CoreMessage, ToolResultPart, streamText, LanguageModel } from 'ai'
+import { CoreMessage, ToolResultPart, streamText, LanguageModel } from 'ai'
+import { after } from 'next/server'
@@
-  saveChatAsync(chatId, userId, messages, definition)
+  after(() => saveChatAsync(chatId, userId, messages, definition))
@@
-  saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries)
+  after(() => saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries))
@@
-    saveChatAsync(chatId, userId, messages, analysisResult.summary || '')
+    after(() => saveChatAsync(chatId, userId, messages, analysisResult.summary || ''))

Also applies to: 210-210, 286-286


15-18: LGTM — protocol contract is now correct.

x-vercel-ai-data-stream: v1 is set on every streamed Response via streamHeaders, the 401 has Content-Type: application/json, all stream paths emit both e: (finish-step) and d: (finish-message) before controller.close(), and annotation frames use the 2:[...] data-part prefix that the provider's data field consumes. These changes resolve the previously-flagged blockers around useChat hanging in isLoading and missing annotations.

Also applies to: 23-26


145-160: No issues found. The researcher function properly returns newSegments as a typed, initialized array that is never undefined, and does not mutate the passed messages array.

components/video-search-section.tsx (1)

14-21: LGTM — JSON.parse is now guarded.

The previous concern about an unguarded JSON.parse crashing the message subtree is addressed by the try/catch with a fallback to the skeleton UI.

Comment on lines +53 to +64
// Build core messages from client messages
const messages: CoreMessage[] = (clientMessages || [])
.filter((m: any) => m.role === 'user' || m.role === 'assistant')
.map((m: any) => ({
role: m.role,
content: m.content
}))

// Trim to max messages
if (messages.length > maxMsgs) {
messages.splice(0, messages.length - maxMsgs)
}

⚠️ Potential issue | 🟠 Major

Trimming via splice breaks the title heuristic and drops tool turns from the replay context.

Two issues co-located here:

  1. messages.splice(0, messages.length - maxMsgs) mutates the array in place, then saveChatAsync later derives the chat title from messages[0] (lines 310-318). After trimming, messages[0] is whichever message survived the cut — often an old assistant turn, not the user's original prompt — so titles drift to assistant text like "Here is the analysis…" instead of the user's question.
  2. The .filter((m) => m.role === 'user' || m.role === 'assistant') then .map((m) => ({ role, content })) drops m.toolInvocations and any role === 'tool' turns. Replayed assistant messages that previously made tool calls now look like plain text to the model, making it likely to re-issue the same calls. If this is intentional, add a comment.
-  // Build core messages from client messages
-  const messages: CoreMessage[] = (clientMessages || [])
-    .filter((m: any) => m.role === 'user' || m.role === 'assistant')
-    .map((m: any) => ({
-      role: m.role,
-      content: m.content
-    }))
-
-  // Trim to max messages
-  if (messages.length > maxMsgs) {
-    messages.splice(0, messages.length - maxMsgs)
-  }
+  // Capture the original first user message before trimming so chat title stays stable.
+  const originalFirstUserMessage = (clientMessages || []).find((m: any) => m.role === 'user')
+
+  const messages: CoreMessage[] = (clientMessages || [])
+    .filter((m: any) => m.role === 'user' || m.role === 'assistant' || m.role === 'tool')
+    .map((m: any) => ({
+      role: m.role,
+      content: m.content,
+      // preserve assistant tool calls so the model doesn't re-issue them
+      ...(m.toolInvocations ? { toolInvocations: m.toolInvocations } : {}),
+    }))
+
+  if (messages.length > maxMsgs) {
+    messages.splice(0, messages.length - maxMsgs)
+  }

…and pass originalFirstUserMessage into saveChatAsync for title computation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 53 - 64, The current in-place
splice and aggressive filter/map remove tool turns and mutate messages used
later for title heuristics: instead of mutating messages with
messages.splice(...) create a trimmed copy (e.g., trimmedMessages =
messages.slice(-maxMsgs)) so the original array (and original first user
message) remains intact; when building messages preserve tool turns and
toolInvocations by including role === 'tool' and copying m.toolInvocations in
the map that produces CoreMessage (rather than dropping those fields); finally
pass the preserved originalFirstUserMessage (capture messages[0] before
trimming) into saveChatAsync for title computation so the title heuristic uses
the original user prompt.

Comment on lines +75 to +98
const lastMsg = messages[messages.length - 1]
const lastText = typeof lastMsg?.content === 'string' ? lastMsg.content.trim().toLowerCase() : ''
if (lastText === 'what is a planet computer?' || lastText === 'what is qcx-terra?') {
const definition = lastText === 'what is a planet computer?'
? `A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`
: `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`;

const encoder = new TextEncoder()
const stream = new ReadableStream({
start(controller) {
controller.enqueue(encoder.encode(`0:${JSON.stringify(definition)}\n`))
controller.enqueue(encoder.encode(`2:[{"relatedQueries":{"items":[]},"type":"related"}]\n`))
controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
controller.close()
}
})

saveChatAsync(chatId, userId, messages, definition)

return new Response(stream, {
headers: streamHeaders
})
}

⚠️ Potential issue | 🟡 Minor

Hardcoded reply branch is fragile and disagrees with the system prompt source-of-truth.

  • The Planet Computer string here says "Climate forecasting" — fine, but the original concern about exact lastText === 'what is a planet computer?' matching still applies: a missing trailing ?, an extra space, or any capitalization quirk in punctuation will fall through to the model and produce a different answer, creating visible inconsistency. Normalize aggressively (strip non-alphanumerics) or, better, move these definitions into the system prompt only.
  • This branch also bypasses taskManager, RAG augmentation, and tool persistence, but still calls saveChatAsync(chatId, userId, messages, definition) with the raw definition — meaning these turns don't capture related queries or any tool outputs in DB. If the user later asks a follow-up, the rehydrated history won't include the canned answer's related queries.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 75 - 98, The hardcoded reply
branch using lastText exact matches (variable lastText) is fragile and bypasses
taskManager/RAG/tool persistence while saving only the raw definition via
saveChatAsync; fix by normalizing the input (e.g., strip punctuation and
whitespace and lowercase lastText) or better yet move these canned definitions
into the system prompt/source-of-truth, and ensure the branch funnels responses
through the same pipeline as other chats: invoke taskManager or the normal
response composer so RAG augmentation and tool outputs are applied, and when
persisting use the same save path (e.g., saveChatAsync) but include the full
response metadata (related queries/tool outputs) rather than only the raw
definition so rehydrated history contains relatedQueries and usage fields;
locate changes around lastMsg/lastText handling, the conditional block that
constructs definition and the call to saveChatAsync to implement this.
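
The aggressive normalization could be as simple as the following sketch (the mapped values here are placeholders, not the real definition strings):

```typescript
// Strip everything but alphanumerics so a missing '?', extra whitespace, or
// casing quirks still hit the canned branch.
function normalizeQuestion(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9]+/g, ' ').trim()
}

const CANNED = new Map<string, string>([
  ['what is a planet computer', 'planet-computer-definition'],
  ['what is qcx terra', 'qcx-terra-definition']
])

function matchCanned(lastText: string): string | undefined {
  return CANNED.get(normalizeQuestion(lastText))
}
```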

Comment on lines +168 to +176
let relatedQueries = {}
if (!errorOccurred) {
const sanitizedMessages: CoreMessage[] = messages.map((m: any) => {
if (Array.isArray(m.content)) {
return { ...m, content: m.content.filter((part: any) => part.type !== 'image') } as CoreMessage
}
return m
})
relatedQueries = await querySuggestor(sanitizedMessages)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm querySuggestor's return shape vs the consumer's expectation.
rg -nP -C5 "export\s+async\s+function\s+querySuggestor" lib/agents/query-suggestor.tsx
rg -nP -C5 "relatedQueries|PartialRelated" lib/schema/related.ts
rg -nP -C2 "related\.relatedQueries|relatedQueries\.items" components/

Repository: QueueLab/QCX

Length of output: 1244


🏁 Script executed:

# Find where PartialRelated is defined (broader search)
rg -n "PartialRelated" --type ts --type tsx

Repository: QueueLab/QCX

Length of output: 84


🏁 Script executed:

# Get complete querySuggestor function implementation
rg -n -A20 "export\s+async\s+function\s+querySuggestor" --type ts --type tsx

Repository: QueueLab/QCX

Length of output: 84


🏁 Script executed:

# Check the route.ts file around lines 168-200 to see how annotation is emitted
fd "route.ts" | xargs head -n 210 | tail -n 50

Repository: QueueLab/QCX

Length of output: 1650


🏁 Script executed:

# Find PartialRelated definition
rg -n "PartialRelated" -t ts -t js

Repository: QueueLab/QCX

Length of output: 1868


🏁 Script executed:

# Find the chat stream route specifically
fd "stream/route" -e ts -e js

Repository: QueueLab/QCX

Length of output: 38


🏁 Script executed:

# Get complete querySuggestor function
rg -n -A30 "export async function querySuggestor" -t ts -t js

Repository: QueueLab/QCX

Length of output: 2651


🏁 Script executed:

# Get the relatedSchema definition to understand the structure
rg -n -A15 "export.*relatedSchema" lib/schema/related.tsx

Repository: QueueLab/QCX

Length of output: 341


🏁 Script executed:

# Get the chat stream route to see annotation emission around lines 168-200
cat -n app/api/chat/stream/route.ts | sed -n '160,205p'

Repository: QueueLab/QCX

Length of output: 2049


Remove the shape mismatch concern; the structure is correct.

querySuggestor returns PartialRelated with top-level shape { items: [...] } (per relatedSchema in lib/schema/related.tsx). The annotation emits { type: 'related', relatedQueries } where relatedQueries is that object, and the consumer correctly accesses related.relatedQueries.items.

However, the bandwidth concern is valid: relatedQueries remains {} when errorOccurred is true (line 168), but the annotation is still emitted unconditionally (lines 199–200). The consumer guards against empty results, but streaming {type:'related', relatedQueries:{}} on every error wastes bandwidth. Consider only emitting the annotation when !errorOccurred && relatedQueries.items?.length > 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 168 - 176, relatedQueries is left
as {} when errorOccurred and then emitted unconditionally, wasting bandwidth;
change logic so relatedQueries is only set and streamed when querySuggestor
returned non-empty results. Specifically: keep using the sanitizedMessages ->
querySuggestor(...) call but set relatedQueries to undefined/null by default,
and when emitting the annotation that currently sends { type: 'related',
relatedQueries } only do so if !errorOccurred && relatedQueries?.items?.length >
0; update the emission code that writes the { type: 'related', relatedQueries }
annotation to be conditional on that check.
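A minimal sketch of that conditional check, with the `PartialRelated` shape assumed from `relatedSchema` (illustrative, not the PR's exact code):

```typescript
// Assumed shape matching relatedSchema's top-level { items: [...] } structure.
interface PartialRelated {
  items?: { query: string }[]
}

// Returns the annotation to emit, or null when there is nothing worth
// streaming — so {type:'related', relatedQueries:{}} is never sent on error.
function buildRelatedAnnotation(
  errorOccurred: boolean,
  relatedQueries: PartialRelated | undefined
): { type: 'related'; relatedQueries: PartialRelated } | null {
  if (errorOccurred) return null
  if (!relatedQueries?.items?.length) return null
  return { type: 'related', relatedQueries }
}
```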

Comment on lines +217 to +298
async function handleResolutionSearch({
messages, chatId, userId, fileData, mapboxImageData, googleImageData,
timezone, drawnFeatures, location
}: {
messages: CoreMessage[]
chatId: string
userId: string
fileData: string
mapboxImageData?: string
googleImageData?: string
timezone?: string
drawnFeatures: DrawnFeature[]
location?: { lat: number; lng: number }
}) {
const content: CoreMessage['content'] = [
{ type: 'text', text: 'Analyze this map view.' },
{ type: 'image', image: fileData, mimeType: 'image/png' }
]
messages.push({ role: 'user', content })

try {
const streamResult = await resolutionSearch(
messages,
timezone || 'UTC',
drawnFeatures,
location
)

const analysisResult = await streamResult.object

// Get related queries
const sanitizedMessages: CoreMessage[] = messages.map((m: any) => {
if (Array.isArray(m.content)) {
return { ...m, content: m.content.filter((part: any) => part.type !== 'image') } as CoreMessage
}
return m
})
const relatedQueries = await querySuggestor(sanitizedMessages)

const encoder = new TextEncoder()
const stream = new ReadableStream({
start(controller) {
// Send resolution result as annotation
const resAnnotation = {
type: 'resolution_search_result',
data: {
...analysisResult,
image: fileData,
mapboxImage: mapboxImageData,
googleImage: googleImageData
}
}
controller.enqueue(encoder.encode(`2:[${JSON.stringify(resAnnotation)}]\n`))

// Stream summary text
if (analysisResult.summary) {
controller.enqueue(encoder.encode(`0:${JSON.stringify(analysisResult.summary)}\n`))
}

// Related queries
const relatedAnnotation = { type: 'related', relatedQueries }
controller.enqueue(encoder.encode(`2:[${JSON.stringify(relatedAnnotation)}]\n`))

controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
controller.close()
}
})

saveChatAsync(chatId, userId, messages, analysisResult.summary || '')

return new Response(stream, {
headers: streamHeaders
})
} catch (error) {
console.error('Resolution search error:', error)
return new Response(JSON.stringify({ error: 'Resolution search failed' }), {
status: 500,
headers: { 'Content-Type': 'application/json' }
})
}
}

⚠️ Potential issue | 🟠 Major

handleResolutionSearch swallows post-streaming errors and persists with fire-and-forget.

Two issues remaining after the recent fixes:

  1. The outer try { … new Response(stream) … } catch covers only the synchronous setup. Once the start(controller) callback executes (lines 258-283), any throw inside it cannot reach the outer catch and the stream just terminates without the e:/d: finish frames, leaving useChat stuck in isLoading=true. Wrap the start body in try/catch and call controller.error(err) (or write a synthetic e:/d: with finishReason:'error') on failure.
  2. saveChatAsync(...) on Line 286 is fire-and-forget on the serverless runtime — same risk flagged on the main path. Use after().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 217 - 298, The stream's start
callback in handleResolutionSearch can throw without reaching the outer catch
and leaves useChat stuck; wrap the entire start(controller) body in a try/catch
and on error call controller.error(err) or enqueue synthetic finish frames (e:
and d: with finishReason:"error") before closing so the client receives a
terminal event; also stop doing fire-and-forget saveChatAsync(...) — schedule it
via after(() => saveChatAsync(chatId, userId, messages, analysisResult.summary
|| '')) (or the equivalent after wrapper used elsewhere) so the save runs safely
in the serverless background.
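A sketch of the guard pattern (the `e:`/`d:` frame format mirrors the route's existing protocol; `write` stands in for `controller.enqueue`, and the real route would still call `controller.close()` in a `finally`):

```typescript
// Frames that terminate an AI SDK data stream with an error finish reason.
function errorFinishFrames(): string[] {
  const finish =
    '{"finishReason":"error","usage":{"promptTokens":0,"completionTokens":0}}'
  return [`e:${finish}\n`, `d:${finish}\n`]
}

// Guard pattern: run the streaming body; on throw, emit terminal frames so
// the client still receives a finish event instead of hanging in isLoading.
function writeGuarded(write: (frame: string) => void, body: () => void): void {
  try {
    body()
  } catch {
    for (const frame of errorFinishFrames()) write(frame)
  }
}
```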

Comment on lines +320 to +374
const aiMessages: AIMessage[] = []

for (const msg of messages) {
let content: CoreMessage['content'] = msg.content
if (Array.isArray(content)) {
content = (content as any[]).filter((part: any) => part.type !== 'image') as CoreMessage['content']
}
aiMessages.push({
id: (msg as any).id || nanoid(),
role: msg.role as AIMessage['role'],
content,
type: msg.role === 'user' ? 'input' : undefined
})
}

// Add tool outputs
if (toolOutputs) {
for (const tool of toolOutputs) {
aiMessages.push({
id: nanoid(),
role: 'tool',
content: JSON.stringify(tool.result),
name: tool.toolName,
type: 'tool'
})
}
}

// Add response
if (answer) {
aiMessages.push({
id: nanoid(),
role: 'assistant',
content: answer,
type: 'response'
})
}

// Add related queries
if (relatedQueries) {
aiMessages.push({
id: nanoid(),
role: 'assistant',
content: JSON.stringify(relatedQueries),
type: 'related'
})
}

// Add end marker
aiMessages.push({
id: nanoid(),
role: 'assistant',
content: 'end',
type: 'end'
})

⚠️ Potential issue | 🟠 Major

Synthetic tool/response/related/end rows get a fresh nanoid() every save → duplicates pile up on every turn.

The user/assistant messages now correctly preserve msg.id (line 328), but tool outputs (line 339), the answer (line 351), the related-queries blob (line 361), and the 'end' marker (line 370) are all built with a brand-new nanoid() on every saveChatAsync call. Since useChat replays the full conversation on each request and saveChatAsync is invoked per turn, prior turns' synthetic rows already in the DB cannot be matched by dbSaveChat's onConflictDoUpdate(id) clause — they get inserted again with new ids. Within a few turns the chat contains many duplicate tool/response/related/end rows, which (a) bloats the table, (b) breaks any deterministic "latest end / latest related" queries you may layer on top, and (c) means the getChatMessages rehydration path in app/search/[id]/page.tsx will replay duplicate assistant text bubbles.

Use a deterministic id derived from chatId + turn index + role/type (e.g., ${chatId}:${turnIndex}:response) so reruns land on the same row, or only persist the delta for the current turn rather than re-writing the full history.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 320 - 374, The synthetic rows
pushed into aiMessages (see the toolOutputs loop, the block adding answer,
relatedQueries, and the final 'end' marker) use fresh nanoid() each save,
causing duplicate rows; change those id generations to deterministic IDs derived
from the chat context (e.g., use chatId + turnIndex + role/type like
`${chatId}:${turnIndex}:response`) or only persist the current turn's delta
instead of re-writing all messages so onConflictDoUpdate(id) will match existing
rows; update the id assignment for the tool output block (tool.toolName), the
assistant answer block, the related-queries block, and the end marker to use
that deterministic scheme.
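One hedged sketch of such a scheme (names illustrative; the real key only needs to be stable across replays of the same turn):

```typescript
// Deterministic ids: replaying the same turn regenerates the same id, so an
// upsert keyed on id (onConflictDoUpdate) updates the existing row instead of
// inserting a duplicate. toolName disambiguates multiple tool rows per turn.
function syntheticMessageId(
  chatId: string,
  turnIndex: number,
  kind: 'tool' | 'response' | 'related' | 'end',
  toolName?: string
): string {
  return [chatId, turnIndex, kind, toolName]
    .filter(part => part !== undefined)
    .join(':')
}
```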

messages: Message[]
input: string
setInput: (value: string) => void
handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: any) => void

🧹 Nitpick | 🔵 Trivial

handleSubmit options is typed any.

The rest of the interface now accurately mirrors the SDK's signatures, but handleSubmit regresses to any. Use ChatRequestOptions here too for consistency and to actually catch caller mistakes.

-  handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: any) => void
+  handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: ChatRequestOptions) => void
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: any) => void
handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: ChatRequestOptions) => void
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/chat-provider.tsx` at line 22, Update the handleSubmit signature
in the chat provider interface to use the proper ChatRequestOptions type instead
of any: change handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?:
any) => void to accept options?: ChatRequestOptions; ensure you import or
reference ChatRequestOptions from the SDK/module where other request types come
from (matching existing imports), update any related usages/calls to satisfy the
stronger type, and run typechecks to fix any mismatches.

Comment on lines +47 to +67
export function ChatProvider({ chatId, initialMessages, children }: ChatProviderProps) {
const {
messages,
input,
setInput,
handleSubmit,
append,
setMessages,
isLoading,
data,
} = useChat({
api: '/api/chat/stream',
id: chatId,
initialMessages,
body: { chatId },
})

const annotations = useMemo(() => {
if (!data) return []
return (data as JSONValue[]).filter(isAnnotation)
}, [data])

⚠️ Potential issue | 🟠 Major

No error surfacing — 401s and stream failures vanish into the console.

useChat() also returns error and accepts onError/onResponse callbacks, none of which are pulled or exposed here. Combined with the route returning a bare 401 Unauthorized JSON (see app/api/chat/stream/route.ts:22-27), this is exactly the failure mode reported on the preview deployment in the PR comments: the POST 401 happened silently and the UI showed no token output. At minimum, expose error through the context (and surface it in a toast or banner in ChatMessages/ChatPanel) so unauthenticated/failed requests don't look like "no response" to the user.

🔧 Suggested addition
-  const {
+  const {
     messages,
     input,
     setInput,
     handleSubmit,
     append,
     setMessages,
     isLoading,
     data,
+    error,
+    reload,
+    stop,
   } = useChat({
     api: '/api/chat/stream',
     id: chatId,
     initialMessages,
     body: { chatId },
+    onError: (err) => console.error('[chat-provider] stream error', err),
   })

…and add error/reload/stop to ChatContextValue and the memoized value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/chat-provider.tsx` around lines 47 - 67, The ChatProvider
currently ignores useChat's error and control callbacks so 401s and stream
failures are swallowed; update ChatProvider to extract error, reload and stop
(and optionally onError/onResponse handlers) from useChat and include them in
the memoized context value and ChatContextValue type so consumers (e.g.,
ChatMessages/ChatPanel) can show a toast/banner on error and provide retry/stop
controls; ensure the memoized value returned by ChatProvider contains error,
reload, and stop alongside messages/input/append/handleSubmit/isLoading/data.
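Once `error` is in the context, a consumer can translate it into a visible message rather than a silent console entry; a framework-free sketch (wording and helper name illustrative):

```typescript
// Map a fetch/stream failure to a user-visible banner message so a 401
// doesn't look like "no response" to the user.
function chatErrorMessage(error: unknown): string | null {
  if (!error) return null
  const msg = error instanceof Error ? error.message : String(error)
  if (msg.includes('401') || /unauthorized/i.test(msg)) {
    return 'You are signed out. Sign in to continue chatting.'
  }
  return `Chat request failed: ${msg}`
}
```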

{searchResults ? (
<>
<Section size="sm" className="pt-2 pb-0">
<ToolBadge tool="search">{`${searchResults.searchParameters.q}`}</ToolBadge>

⚠️ Potential issue | 🟡 Minor

Defensive access to searchParameters.q.

If parsing succeeds but the payload shape is partial (e.g., upstream returned a different schema), accessing searchResults.searchParameters.q will throw inside render. Use optional chaining and a fallback so the section stays robust against payloads that are valid JSON but don't match the expected shape.

-            <ToolBadge tool="search">{`${searchResults.searchParameters.q}`}</ToolBadge>
+            <ToolBadge tool="search">{`${searchResults.searchParameters?.q ?? ''}`}</ToolBadge>
📝 Committable suggestion


Suggested change
<ToolBadge tool="search">{`${searchResults.searchParameters.q}`}</ToolBadge>
<ToolBadge tool="search">{`${searchResults.searchParameters?.q ?? ''}`}</ToolBadge>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/video-search-section.tsx` at line 28, The render currently
accesses searchResults.searchParameters.q directly which can throw if
searchParameters is missing; update the usage inside ToolBadge (component:
ToolBadge) to use optional chaining and a safe fallback (e.g.,
searchResults?.searchParameters?.q ?? '' or a placeholder like '—') so rendering
remains robust against partial/malformed payloads while keeping the rest of the
JSX unchanged.

Comment thread lib/agents/researcher.tsx
Comment on lines +134 to +145
const newSegments: CoreMessage[] = [
{
role: 'assistant',
content: [{ type: 'text', text: fullResponse }, ...toolCalls],
}
]

if (toolResponses.length > 0) {
messages.push({ role: 'tool', content: toolResponses })
newSegments.push({ role: 'tool', content: toolResponses })
}

-  return { result, fullResponse, hasError, toolResponses }
+  return { result, fullResponse, hasError, toolResponses, newSegments }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm the retry loop in the route handler that consumes newSegments.
rg -nP -C5 'newSegments|researcher\(' app/api/chat/stream/route.ts

Repository: QueueLab/QCX

Length of output: 861


🏁 Script executed:

#!/bin/bash
# Examine the researcher function implementation, especially around newSegments construction
cat -n lib/agents/researcher.tsx | sed -n '1,50p'  # Header and imports
echo "---"
cat -n lib/agents/researcher.tsx | sed -n '100,180p'  # Context around lines 134-145

Repository: QueueLab/QCX

Length of output: 4359


🏁 Script executed:

#!/bin/bash
# Look for how fullResponse and toolCalls/toolResponses are populated
rg -n "fullResponse|toolCalls|toolResponses" lib/agents/researcher.tsx | head -30

Repository: QueueLab/QCX

Length of output: 611


🏁 Script executed:

#!/bin/bash
# Check if there are any guards or validation around empty text or tool mismatches elsewhere
rg -n "type.*text|toolCallId" lib/agents/researcher.tsx | head -20

Repository: QueueLab/QCX

Length of output: 129


🏁 Script executed:

#!/bin/bash
# Check ToolCallPart and ToolResultPart structure - they should have toolCallId
rg -n "toolCallId|ToolCallPart|ToolResultPart" lib/agents/researcher.tsx

Repository: QueueLab/QCX

Length of output: 186


🏁 Script executed:

#!/bin/bash
# Check if tools are actually defined with IDs and look at the type definitions
fd -t f "tools" lib/agents/ | head -5

Repository: QueueLab/QCX

Length of output: 38


🏁 Script executed:

#!/bin/bash
# Trace a complete scenario: look for examples of how tool-result matches tool-call in the ai package or codebase
rg -n "toolCallId" . --type ts --type tsx | head -10

Repository: QueueLab/QCX

Length of output: 84


🏁 Script executed:

#!/bin/bash
# Check if there's package.json or any ai package reference
cat -n package.json | grep -A2 -B2 '"ai"'

Repository: QueueLab/QCX

Length of output: 260


🏁 Script executed:

#!/bin/bash
# Look for any tool execution/result handling to understand the matching mechanism
rg -n "toolResponses" lib/agents/researcher.tsx -A5 -B5

Repository: QueueLab/QCX

Length of output: 1016


🏁 Script executed:

#!/bin/bash
# Check how tool results are handled in the route that uses this
rg -n "toolOutputs|toolResponses|allToolOutputs" app/api/chat/stream/route.ts -B3 -A3

Repository: QueueLab/QCX

Length of output: 1832


Unmatched tool calls can corrupt the message history on retry attempts.

When an 'error' event occurs in the stream after a 'tool-call' but before its corresponding 'tool-result', the newSegments will contain tool calls without matching tool results. Because the route handler appends newSegments to the message history whenever errorOccurred is true (line 157–158), the next retry attempt will inherit an invalid message sequence with dangling tool calls.

Additionally, if the stream ends without any text deltas or error messaging, fullResponse remains empty, resulting in an assistant message with an empty text part.

The proposed guard approach—filtering tool calls to only include those with matching results—is correct. However, verify that ToolCallPart and ToolResultPart objects from the ai package v4.3.19 use the toolCallId property for matching; the codebase shows only toolName in actual usage.

🛡️ Proposed guard (verify property names)
-  const newSegments: CoreMessage[] = [
-    {
-      role: 'assistant',
-      content: [{ type: 'text', text: fullResponse }, ...toolCalls],
-    }
-  ]
-
-  if (toolResponses.length > 0) {
-    newSegments.push({ role: 'tool', content: toolResponses })
-  }
+  const assistantContent: any[] = []
+  if (fullResponse) {
+    assistantContent.push({ type: 'text', text: fullResponse })
+  }
+  // Only include tool_calls that have a matching tool_result, to keep the
+  // conversation valid for retry attempts in the route handler.
+  const matchedToolCalls = toolCalls.filter(tc =>
+    toolResponses.some(tr => tr.toolCallId === tc.toolCallId)
+  )
+  assistantContent.push(...matchedToolCalls)
+
+  const newSegments: CoreMessage[] = []
+  if (assistantContent.length > 0) {
+    newSegments.push({ role: 'assistant', content: assistantContent })
+  }
+  if (toolResponses.length > 0) {
+    newSegments.push({ role: 'tool', content: toolResponses })
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/researcher.tsx` around lines 134 - 145, newSegments can include
tool-calls without matching tool-results when a stream errors after a
'tool-call' but before its 'tool-result', and also can create an assistant
message with empty text when fullResponse is empty; fix by filtering toolCalls
to only include those that have a corresponding toolResult (match by the
ToolCallPart/ToolResultPart identifier — verify and use toolCallId if present,
otherwise toolName) before building newSegments, and skip adding an assistant
text part when fullResponse is empty; update the logic around the variables
newSegments, toolCalls, toolResponses, ToolCallPart, ToolResultPart,
fullResponse, and the errorOccurred handling so only paired
tool-call/tool-result entries are appended to message history.

Comment thread lib/agents/writer.tsx
Comment on lines 22 to 35
const result = await nonexperimental_streamText({
model: (await getModel()) as LanguageModel,
maxTokens: 2500,
-    system: systemToUse, // Use the dynamic or default system prompt
+    system: systemToUse,
messages
})

for await (const text of result.textStream) {
if (text) {
fullResponse += text
streamText.update(fullResponse)
}
}

streamText.done()

return fullResponse

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🌐 Web query:

In Vercel AI SDK v4, does generateText support the same model/system/messages/maxTokens parameters as streamText, and is it the recommended primitive for final-string-only consumers?

💡 Result:

Yes, in Vercel AI SDK v4, generateText supports the same model, system, messages, and maxTokens (as maxOutputTokens or maxTokens) parameters as streamText. It is the recommended primitive for final-string-only consumers, such as non-interactive use cases like drafting emails, summarizing web pages, or agents using tools, where you await the full generated text rather than streaming it.

Citations:


Prefer generateText now that no caller consumes deltas.

With UI streaming removed, the for-await loop only exists to reassemble the final string. generateText returns the full text directly and is the idiomatic primitive for final-value-only consumers (parallel to the streamObject → generateObject swaps elsewhere in this PR).

♻️ Proposed refactor
-import { CoreMessage, LanguageModel, streamText as nonexperimental_streamText } from 'ai'
+import { CoreMessage, LanguageModel, generateText } from 'ai'
 import { getModel } from '../utils'
@@
-export async function writer(
+export async function writer(
   dynamicSystemPrompt: string,
   messages: CoreMessage[]
 ) {
-  let fullResponse = ''
-
   const default_system_prompt = `...`
@@
   const systemToUse = dynamicSystemPrompt && dynamicSystemPrompt.trim() !== '' ? dynamicSystemPrompt : default_system_prompt;
 
-  const result = await nonexperimental_streamText({
+  const { text } = await generateText({
     model: (await getModel()) as LanguageModel,
     maxTokens: 2500,
     system: systemToUse,
     messages
   })
 
-  for await (const text of result.textStream) {
-    if (text) {
-      fullResponse += text
-    }
-  }
-
-  return fullResponse
+  return text
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/agents/writer.tsx` around lines 22 - 35, Replace the
nonexperimental_streamText usage and manual for-await reassembly with
generateText: call generateText({ model: await getModel() as LanguageModel,
maxTokens: 2500, system: systemToUse, messages }) await its result and return
the returned full text directly instead of looping over result.textStream and
concatenating into fullResponse; remove the fullResponse variable and the
for-await loop around nonexperimental_streamText.


Development

Successfully merging this pull request may close these issues.

AI-SDK RSC MIGRATION FOR PRODUCTION
